




Jaargang 11, nommer 2, 2007 R50,00


Perspectives on Academic and Organisational Discourse 
Compiled by Jurie Geldenhuys 



H.G. Butler 
Adelia Carstens 
Alan Cliff 
Kutlwano Ramaboa 
Carol Pearce 
A.S. Coetzee-Van Rooy 
Jurie Geldenhuys 
Henk Louw 
Elizabeth J. Pretorius 
Frans van der Slik 
Albert Weideman 
Johann L. van der Walt 
H.S. Steyn (Jnr.) 
T.J. van Dyk 
L. van Dyk 
H.C. Blanckenberg 
J. Blanckenberg 




Digitized by the Internet Archive 
in 2017 with funding from 
University of Pretoria, Library Services 


https://archive.org/details/ensovoort11unse_0 


CONTENTS 






Jaargang 11, nommer 2, 2007 


2 Editorial 

Articles 


4 H.G. Butler: To flout or not to flout - academic writing 
conventions in tertiary education 


Redaksie (Editors):
Johann Lodewyk Marais (editor-in-chief),
Renée Marais (executive editor),
Ank Bekkers-Linssen,
Johann de Lange,
Karen de Wet,
Mabel Erasmus,
Louis Esterhuizen,
Louis Gaigher,
Jurie Geldenhuys,
Joan Hambidge,
Annette Jordaan,
Leti Kleyn,
Viola Milton,
Johan van Wyk


13 Adelia Carstens: Comprehension of pictures in 
educational materials for HIV/AIDS 

33 Alan Cliff, Kutlwano Ramaboa and Carol Pearce: 

The assessment of entry-level students' academic literacy: 
Does it matter? 

49 A.S. Coetzee-Van Rooy: Functional multilingualism at 
the North-West University: Communication difficulties 
in meetings 

71 Jurie Geldenhuys: Test efficiency and utility: 

Longer or shorter tests 

83 Henk Louw: Moving to more than editing: 

Standardised feedback in practice 

105 Elizabeth J. Pretorius: Looking into the seeds of time: 
Developing academic literacy in high poverty schools 

126 Frans van der Slik and Albert Weideman: 

Testing academic literacy over time: Is the academic 
literacy of first year students deteriorating? 


Redaksieraad (Editorial board):
Jac Conradie,
Eep Francken,
Siegfried Huigen,
Jerzy Koch,
C.H.F. Ohlhoff,
A.J. Weideman

Address for contributions and subscriptions:
The Editors, Ensovoort, Posbus 30314,
Wonderboompoort, 0033, South Africa
E-mail: hlomiso@mweb.co.za


138 Johann L. van der Walt and H.S. Steyn (Jnr.): 

Pragmatic validation of a test of academic literacy 
at tertiary level 

154 T.J. van Dyk, L. van Dyk, H.C. Blanckenberg en 
J. Blanckenberg: Van bevreemdende diskoers tot 
toegangsportaal: e-Leer as aanvulling tot 'n 
akademiese geletterdheidskursus 


The journal appears twice a year and costs R50 per issue, i.e. R100 per volume.
Subscriptions for institutions are R75 per issue, i.e. R150 per volume.

Ensovoort is published with the support of the Unit for Academic Literacy
at the University of Pretoria.


Applied linguistics and academic literacy 


It is with great pleasure that, as the guest editor of this special edition of Ensovoort, 
I write this introductory note. I wish to record at the outset not only my own 
appreciation, but also that of my department, the Unit for Academic Literacy 
(UAL), as well as that of the various contributors, from several different institu- 
tions of higher education, to the editor-in-chief, Johann Lodewyk Marais, who 
generously afforded us this opportunity. 

We are privileged to be able to bring these contributions together in a volume
dedicated to issues of literacy generally, and to that of academic literacy in
particular. All of the contributions were handpicked: some came from presentations
heard by members of the UAL, while others were invited from researchers whom we
knew to be doing new and fresh work within the broader field of applied linguistics.

The responsible work that is being done within the sub-field of academic
literacy is almost without fail related to theoretical frameworks and pursuits within
the larger field of applied linguistics. Academic literacy itself is no longer the
domain of those who only wish to do good or to make some overt political state- 
ment. Rather, it is a growing field involving those who intend to go about their 
business of facilitating the language development of students in as theoretically 
and socially responsible a way as possible. As a recent external evaluation of our 
department has indicated, teaching on its own will never be sufficient to give an 
adequate response to the needs of the students at the receiving end of our lan- 
guage instruction: our teaching needs to be informed by research of the highest 
quality in order not to stagnate or deteriorate into smugness. 

I trust that the contributions in this special volume will meet with these and 
with the reader's expectations in this regard. While three of the contributions on 
academic literacy, those of Butler, of Louw, and of Van Dyk et al., deal with the 
designed solutions to low levels of academic literacy in writing instruction and 
e-learning, it is telling that the four others, by, respectively, Cliff, Ramaboa and 
Pearce, Geldenhuys, Van der Slik and Weideman, and Van der Walt and Steyn, 
address the initial diagnosis of the problem: how these levels of literacy are
measured in the first place. In both sets of contributions, the increasing
sophistication with which academic literacy levels are measured, and the
research-backed solutions that are designed to address inadequate competence,
are very much in evidence.




The contribution of Coetzee-Van Rooy stays within the domain of language in
higher education, but examines its use for administrative purposes, while the 
remaining two come at the problem obliquely: Carstens deals with the com- 
prehensibility of health education materials, and Pretorius with early literacy in 
an environment characterised by poverty. 

We are grateful to the referees, who have done an excellent job, and have, 
through their comments, significantly enhanced the quality of the various con- 
tributions. 

Jurie Geldenhuys 

Pretoria, 6 December 2007 



To flout or not to flout - academic writing conventions in 
tertiary education 

H.G. Butler 
University of Pretoria 



As a result of its primary importance as a vehicle for demonstrating students'
academic competence in tertiary education, the ability to write acceptable
academic texts has been a topic for discussion in the development of academic
literacy for a number of years. In recent times, such discussion has awarded
prominence to students' seeming inability to make effective use of academic
discourse in both receptive and productive modes in their studies. Whereas one
may expect students new to the tertiary environment to be relatively inexperienced
academic writers, one generally expects postgraduate students with their
considerable experience of university study to be proficient academic writers.
This, however, appears to be an erroneous assumption. What complicates the matter
further is that academic discourse used by academic writers can by no means be
considered to be a homogeneous phenomenon in the tertiary academic context.
Although one would expect writing norms and conventions to be relatively similar
across disciplines, academics tend to adapt the discourse used in their specific
disciplines to suit the needs of the discipline. It follows logically that efforts
to support students with their academic writing should be cognizant of what is
required of students (preferably in their specific disciplines). As an attempt at
conceptualising relevant writing support, this paper explores the notion that
academic discourse is governed by normative conditions that find expression in
the conventions and characteristics of academic texts. It further presents a
critical discussion of some of those characteristic features that have been mooted
as generic to academic discourse. In conclusion, it reflects on extreme versions
of critical literacy theory that encourage students to challenge the conventions
and dominant norms of academic discourse in their writing.


1. Introduction 

This article has its origin in a recently completed doctoral study on the design of 
solutions for a pertinent applied linguistics problem - the offering of appropriate 
academic writing support to tertiary students. Based on the copious amounts of 
literature available on the topic, it is clear that this problem is not restricted to the 
South African tertiary context, but is a difficulty that students appear to experi- 
ence at universities internationally. The topic for the article comes from a context 
where, being new to the field of academic literacy (and writing) support, I had to 




conceptualise a writing intervention for postgraduate students at the University 
of Pretoria (UP). One of the more obvious issues at the time was to decide about 
the content and nature of such a course, in other words, what it was that one 
wished to address and how such an intervention had to be designed to provide 
the best possible learning opportunity for students. As an initial step I set out to
investigate whether any generalisable features/characteristics could be said to 
form part of academic discourse in order to decide about important aspects to 
include in a generic postgraduate writing course. This paper is an account of the 
questions I felt compelled to ask about some of the more prominent, traditional 
features of written academic discourse contained in various guide books and 
courses on the topic. It is, however, important to mention at the outset that it is
difficult to generalise about what characterises academic texts, because of the
diverse application of these features in different disciplines.

As a long-time convert to functional linguistics, I considered it important to
assess prominent textual features for their functionality in the context of writing
'academically'. I therefore wanted to determine whether such features had real 
functional value in terms of what was accomplished through their use, or wheth- 
er they primarily formed part of a long tradition perpetuated by those in power at 
specific times in specific discourse communities. What mainly prompted this 
enquiry was some of the earlier work in critical literacy (cf. Bizzell, 1992) that
questioned academic writing conventions on the basis that they were used as a 
device to maintain the status quo with regard to power relations and to exclude 
minority groups from participation in the academic discourse community. It was 
therefore important for me to determine whether such criticism was primarily 
part of a political agenda of select individuals or if academic writing conventions 
were without substantial function inherently, and thus merely part of the ongoing
tradition of writing in a tertiary context. I do believe that many academics
(usually in disciplines other than language) understandably perpetuate
characteristics/features of written academic texts in their disciplines without
really questioning their applicability and value for their current context. My
greatest concern is whether we are not merely requiring students to learn to write
'as we do' because of the writing tradition we were subjected to, reverting to a
largely irrelevant initiation ritual into the 'world of the academy'. Having said
this, I want to
emphasise that I do not wish to pass judgment on the value of tradition, but 
rather suggest that one should be heedful of tradition and its continued utility for 
changing circumstances over time. 




2. Textual conventions of academic discourse 

From the preceding discussion it is apparent that if one wants to refer to academic 
discourse as an objective, factual entity on its own, one should be able to say what 
it is and, therefore, which distinctive features (should) characterise such dis- 
course. Determining the academic writing conventions on a textual (lexical, gram- 
matical, stylistic) level for the whole of the academic discourse community would, 
however, be a mammoth task to accomplish. We do of course have access to texts 
produced all over the world in an academic context and should be able to infer 
certain generic features from such texts (this is part of the focus of some of the 
more recent studies in the field of corpus linguistics [cf. Conrad, 1996; Biber, 
Johansson, Leech, Conrad, & Finegan, 1999]). The descriptions of the textual char- 
acteristics featuring in academic writing courses and manuals may be determined, 
therefore, by conducting text investigations. In addition, in determining the typ- 
ical identity of academic discourse, one should ask not only whether such fea- 
tures are limited to academic discourse, but also whether they form part of other 
types of discourse (cf. Hyland, 2000). It is obvious that in the case of a requirement 
such as the formality of academic writing, for example, academic discourse shares 
this characteristic with much occupational writing (technical report writing in 
engineering, for example, as well as much of business communication). A more 
realistic deduction regarding academic writing as a separate discourse would be 
that on a textual level, features also found in other discourses are combined in 
such a way as to form what could be referred to as academic discourse. 
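The kind of corpus-based text investigation referred to above can be made concrete with a minimal sketch. The marker sets and the two sample sentences below are my own illustrative assumptions, not categories or data taken from Conrad (1996) or Biber et al. (1999); a real corpus study would use far richer feature inventories and large text collections.

```python
import re

# Illustrative marker sets: assumptions for this sketch only,
# not feature lists drawn from the corpus studies cited in the text.
NOMINALISATION_SUFFIXES = ("tion", "sion", "ment", "ness", "ity")
FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}

def profile(text):
    """Return rough per-1000-word rates of two candidate textual features."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return {"nominalisations": 0.0, "first_person": 0.0}
    # Long words ending in a nominalising suffix are treated as nominalisations.
    nominalisations = sum(
        1 for w in words if w.endswith(NOMINALISATION_SUFFIXES) and len(w) > 6
    )
    first_person = sum(1 for w in words if w in FIRST_PERSON)
    scale = 1000 / len(words)
    return {
        "nominalisations": nominalisations * scale,
        "first_person": first_person * scale,
    }

academic = "The nominalisation of processes enables abstraction and argumentation."
casual = "I think we should just talk about what we did."
print(profile(academic))
print(profile(casual))
```

Comparing such rates across many texts from different registers is what would allow a feature such as nominalisation density to be attributed to academic discourse specifically, rather than to formal writing in general.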

Sometimes the suitability of certain surface structure academic writing con- 
ventions is questionable. At times, there appears to be a mismatch between what 
academic language is supposed to accomplish and the actual language forms that 
are used in written academic English, for example. In addition, some of the tradi- 
tional features of academic texts at times seem to be conflicting in terms of what 
writers supposedly wish to accomplish by employing such features. The follow- 
ing discussion focuses on some of the traditional features of academic discourse. 
Although criticism is offered regarding the value/functionality of some of these 
characteristics, it does not necessarily imply that students should be encouraged 
to flout them deliberately (as is suggested by extreme versions of a critical literacy 
approach, cf. Bizzell, 1992), especially in cases where they are deeply entrenched 
in some disciplinary discourse. The aim of the following discussion, therefore, is 
to assess the value of such features in purposefully contributing to the 'academic' 
quality of written texts, in other words, whether they are used in response to 
some normative condition with specific functionality in the tertiary academic 
context. 




2.1 Formality 

One of the more prominent stylistic features found in various guides and workbooks
on academic writing is the notion that it makes use of a formal register. As
a number of authors note (amongst others Swales & Feak (1994), Henning, Gravett 
& van Rensburg (2002) and Coffin et al. (2003)), this can be seen most visibly in the 
choice of lexical items used in this type of discourse, where, for example, if there 
is a choice between a more informal and a more formal word, the default choice 
would usually be the formal option. Swales and Feak (1994) refer to this feature as 
a 'vocabulary shift'. In this regard, the use of words characterised as
colloquialisms and slang is generally not appropriate in academic writing. The
important question regarding such formality is whether it would make any
substantial functional difference if students were allowed to use a more informal
register in their academic texts. After all, apart from the fact that one may
initially misunderstand student texts in the case of informal writing (which is 
rather doubtful since most academics are usually experienced academic readers), 
one should ask what functional purpose it serves for academic writing to be 
formal. Is the creation of a sense of seriousness, the impression that academics
are engaged in what may be perceived to be important matters, really that important
functionally? It is probably this sense of seriousness, the awareness that one is
dealing, through language, with issues that are generally true, that Blanton (1998)
is characterising, amongst other things, when she speaks about academic discourse
having 'authority'. The formality that is so often mooted as a characteristic of
academic discourse no doubt serves to enhance the authoritativeness of the claims made
in such language. The main point is that formality per se is not a characteristic of 
academic discourse, but becomes such a feature when it is used for an academic
purpose and with academic intent, viz. the condition of appropriateness and 
authoritativeness. The functionality gives a typical academic purpose to the for- 
mality. 

2.2 Conciseness and exactness 

A second feature of academic texts is that they are supposed to be as concise and
exact as possible. Connected to the feature of formality, the use of indeterminate
or vague lexical items such as 'thing' and 'something' is, therefore, not usually
exact enough to be acceptable in academic writing. Along the same lines, verbosity
and redundancy (such as unnecessary repetition) clutter academic argumentation
and are not supposed to be surface features of academic texts. However, it is
interesting that, for example, the general avoidance of first person pronouns and 
contractions so often mooted as being stylistic features of academic writing (cf. 
Biber et al, 1999) contradicts this convention because such structures are often 
replaced by longer strings of words/letters, rendering the text less economical. 
The avoidance of the latter, however, supports the feature of formality referred to 




above, since they are usually associated with more casual, informal, less severely 
constrained forms of discourse, such as conversations among equals. Again, in 
this case, the condition of clear argumentation governs the use of language in 
academic texts. 

2.3 Impersonality

Coffin et al. (2003:29) remark that: "For much of the twentieth century, particular- 
ly in the sciences, the notion of objectivity meant that there was no place for a 
personal voice." One needs to question whether it really leads to more objective
writing if one refers to oneself as 'the author' and not 'I', for example.
Conversely, does it lead to more subjective writing if the author personalises a
text? Is the
quality of research not rather to be found in how the research was structured and 
conducted, or in its content? In fact, the more recent view is that for students to 
become competent authors of academic discourse, they need to achieve their 
own 'voice', i.e. express their own identity (cf. Ivanic & Simpson, 1992). Again, an 
(emerging) material condition finds expression in the formal features of language, 
i.e. when we actually encourage (newly initiated) academic writers to use the 
personal pronoun. And without 'voice', there is no critical thinking, the hallmark 
of academic reasoning. It is important to note though that there are instances 
where students 'hide' behind their personal voices. When one reads such
personalised academic texts, it becomes clear that these students are rather
uninformed about the related literature; they therefore merely state what they
know about an issue. It
is thus important that students be made aware of a balanced argument that incor- 
porates authoritative views on issues as well as their own. 

In a related issue, the use of the passive is normally supposed to make writing 
more impersonal (an important traditional feature of academic writing), yet sourc- 
es on academic writing differ about whether using passives is a good practice in 
such writing. Academic texts are written at and for different levels of accessibility, 
and we may therefore in some cases wish to avoid passives in order to write more 
intelligibly. 

2.4 Nominalisation 

Another important feature of academic discourse is the degree of nominalisation 
that typically characterises such texts. Ventola (1998:68) maintains that scientific 
language has evolved over time to suit the needs of those who practice it. She 
explains this change as follows: 

The grammar of scientific language has changed as reporting about scientific
experiments and processes have developed. Thoughts are now foregrounded.
Dynamic actions have become static, intellectualised, when grammatical roles
have changed, through nominalization, from processes or events into things.




The discourse act that involves the nominalisation of processes, of course, makes 
it possible for academic discourse to create abstractions - something that Martin 
and Rose (2003:103) refer to as 'ideational metaphor'. They explain metaphor in 
general as "a transference of meaning in which a lexical item that normally means 
one thing comes to mean another" (Martin & Rose, 2003:103). For them, ideational 
metaphor involves a transference of meaning from one kind of element (in this 
case a grammatical element) to another. The example they provide clearly
illustrates this shift in meaning, where a process such as marrying can also be
treated as a quality, married, as well as a thing, marriage. These authors further explain
that in modern written languages, the shift in meaning accomplished when us- 
ing a strategy such as nominalisation expands the set of meanings available to 
writers. In essence, the creation of an abstraction that is achieved through nomi- 
nalisation serves the purpose that is central to theorising, conceptualisation and 
argumentation in academic writing, viz. distinction-making. 

What is further evident is that a high degree of nominalisation is one of the
features of academic writing that, because it makes the language more complex
(through a strategy of concision), also renders it less readable (and, therefore,
less accessible), especially to those who do not yet form part of the academic discourse
community. Although students new to this environment might have had some 
limited exposure to information-dense academic texts, this is one of the obstacles 
that denies many students, especially additional language users, access to the
tertiary environment. It might also be interesting to note that, again, an
important feature of academic writing, in this case its information density, seems
to negatively affect another feature, its clarity, as students new to this
environment struggle to unlock the meaning in such texts. Relevant support to
enable students to unlock such texts productively seems unavoidable if many 
new students are to succeed with their studies in this environment. 
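The readability cost of dense nominalisation described above can be illustrated with a standard readability formula. The sketch below computes the Flesch Reading Ease score (whose constants are standard) with a crude vowel-group syllable heuristic; the syllable counter and the two contrasting sample sentences are my own simplifications, not materials from the studies cited.

```python
import re

def syllables(word):
    """Crude syllable estimate: count vowel groups (an approximation only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier text."""
    words = re.findall(r"[a-zA-Z]+", text)
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllable_count = sum(syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllable_count / n)

# A heavily nominalised wording versus a more congruent one (my own examples).
nominalised = ("The implementation of the nominalisation of processes enables "
               "the intellectualisation of argumentation.")
congruent = "When writers turn verbs into nouns, the text gets harder to read."
print(flesch_reading_ease(nominalised))
print(flesch_reading_ease(congruent))
```

The nominalised version scores far lower because its nouns pack many syllables into few words, which is precisely the trade-off between concision and accessibility noted above.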

2.5 Grammatical correctness 

Grammatical correctness of academic texts is supposed to be non-negotiable in 
the academic world. Student writing, however, often appears to be riddled with 
grammatical errors. The question should then be asked why very little evidence
exists to suggest that lecturers from disciplines other than language pay any
attention to grammar when they mark undergraduate student scripts, or why, when
they do, they do so in a highly selective way, focusing on only one or two
grammatical features (e.g. tense, concord). These lecturers in some way still seem
to understand student writing, which indicates that the communicative requirement,
viz. conveying the appropriate information, in this case from student to lecturer, 
is being met. This issue could probably also be connected to that of coherence in 
student writing discussed under the next point. This feature may further be linked 
to the conditions of authority and clarity of meaning. It would be difficult to be 
taken seriously if one's written text is riddled with errors. 




2.6 Coherent and cohesive argumentation

Coherence and cohesion in academic writing are mostly created by the purposeful
use of connecting devices that highlight the flow of ideas and signal the
writer's intentions regarding the specific relationships between such ideas. While 
cohesion usually involves sentential and ideational connection within the text, 
coherence refers to the overall organisation of text into a recognisable sequence 
(e.g. text development from the introduction to conclusion). Prosser and Webb 
(1993) refer to specific devices used to create a predictable text structure (such as 
presenting all the main sections of an essay in the introduction, thereby creating 
an expectation as to what will follow) as 'predictive scaffolding'. Proficient aca- 
demic writers make use of such devices in order to lead readers through a text, 
also showing awareness of the fact that academic readers will probably know the 
textual patterns of academic texts and therefore find it easier to understand texts
organised in this manner. Formulated in ethnomethodological terms: competent 
academic writers (and the readers of their texts) have an orientation to something 
that we may term an argumentative schema or framework. Once this framework 
is activated, e.g. through the use of discourse markers, the text becomes more 
intelligible. We again have an instance here of how a factual feature of academic 
texts, in this case coherence, is determined by a norm or condition - the orienta- 
tion towards an argumentative framework. 

Given the number of complaints by lecturers about students producing inco- 
herent texts (especially at postgraduate level), one could ask whether this issue 
might not also be related to ways in which lecturers generally read and respond 
to student texts. Do lecturers read student scripts for fluent argumentation, or are 
assessment opportunities arranged in such a way that only fragmented chunks of
knowledge are often required of learners and therefore acknowledged by lecturers?
If so, this is a clear example of students' overall literacy development being
neglected by lecturers in the undergraduate years; when supervisors then require
language fluency and correctness at postgraduate level, students suffer the
consequences of such neglect.

2.7 Appropriate use of evidence

Academic writing shows certain conventions with regard to how the ideas/words
of authorities (other sources) are acknowledged. Although different referencing 
systems are used across the world, what is shared by academic writing (in a 
western context) is that other people's ideas should be overtly acknowledged in
one's own academic writing. It is interesting that the notion of writing and ideas
as the individual's 'property' is not always shared by all cultures, especially where,
historically, the development of ideas and knowledge has taken a different route. 
In China, for example, a learned person is often recognised as someone who can 
memorise information very well, especially regarding texts that classical authors




wrote. As a consequence, such texts become part of the person's memory and are 
supposed to be recognised by other learned people without it being necessary for
anyone to state explicitly that the words were initially spoken or written by 
somebody else. 

Another interesting perspective on the issue of plagiarism is that of Angelil- 
Carter (2000) who notes that neophyte writers may be making use of sources as 
models for meeting specific written conventions and norms of the academic dis- 
course community. Although they may not necessarily want to copy the ideas of 
a source, they may want to copy the way in which language is used by the source. 
So, while violating one of the most important conditions of academic writing on 
the one hand, they might be striving to meet another, that of the appropriate use 
of language, on the other. In this case, a degree of flexibility is called for: one
should understand the predicament of writers new to this environment and recognise
that this kind of copying might form part of their process of becoming more
proficient academic writers.

What lecturers require, however, is that references should be purposefully 
integrated into the text in support of the writer's argument, and not just be a 
collection of quotes without relationship or interpretation. Again, the idea of aca- 
demic writing being framed by the notion of a structured argument is evident: 
references are used to support one's argument. Similarly, the concept of authority 
comes to the fore: in order to enhance the authority of one's own academic text, 
one supports it with reference to that of an already acknowledged authority. Ulti- 
mately, the rhetorical purpose of arguing with authority (in specific genres) may 
be the most important condition of academic writing. Other genres may have 
another rhetorical purpose, e.g. the laboratory report that provides an account of 
a scientific procedure, providing specific information in a predetermined format. 


3. Conclusion 

I have tried to emphasise in this article the notion that, similar to any other
type of discourse, academic discourse cannot be divorced from its social context.
Academic discourse is further not a homogeneous entity, but varies considerably
across and even within disciplines in the tertiary academic environment. This 
variability is a crucial feature of academic discourse that should inform the de- 
sign of writing courses in university education. Nonetheless, certain key norma- 
tive features of academic discourse can be identified, and one can identify, also, 
various typical features of academic texts that are regulated by such normative 
conditions, and that are in complex interaction with one another. 

Thus, the true worth of extremities in theorising (such as extreme versions of 
critical literacy) is that they urge one to revisit critically some relatively estab- 
lished, traditional notions on what constitutes the features of academic discourse




(with specific reference to academic writing). Therefore, in the case of extreme 
versions of critical literacy, they accomplish exactly what they set out to do, viz. to 
entice critical enquiry. It is important then that extreme versions of critical litera- 
cy not be discarded immediately as 'too radical' but be carefully assessed with 
regard to focusing critical enquiry. 

I strongly believe that identifying a core of traditional (functional) character- 
istics of academic discourse serves as a necessary foundation for initiating discus- 
sions with various role players in specific disciplines towards relevant and re- 
sponsible writing course design for students in such disciplines. Such issues 
could then be discussed and negotiated with such persons in order to ascertain 
their specific needs and to ensure as far as possible that these issues are contextu- 
alised within such disciplines. 


Bibliography 

Angelil-Carter, S. 2000. Understanding plagiarism differently. In: Leibowitz, B. & Mohamed, Y. 
(eds.). Routes to writing in Southern Africa. 2000: 154-177.

Biber, D., Johansson, S., Leech, G., Conrad, S. & Finegan, E. 1999. Longman grammar of spoken
and written English. Harlow: Pearson Education. 

Bizzell, P. 1992. Academic discourse and critical consciousness. Pittsburgh: University of Pittsburgh
Press. 

Blanton, L.L. 1998. Discourse, artefacts, and the Ozarks: Understanding academic literacy. In: 
Zamel, V. & Spack, R. (eds.). Negotiating academic literacies: Teaching and learning across
languages and cultures. 1998: 219-236.

Coffin, C., Curry, M.J., Goodman, S., Hewings, A., Lillis, T.M. & Swann, J. 2003. Teaching 
academicwriting. New York: Routledge. 

Conrad, S.M. 1996. Investigating academic texts with corpus-based techniques: An example 
from Biology. Linguistics and Education, 8:299-326. 

Henning, E., Gravett, S. & Van Rensburg, W. 2002. Finding your way in academic writing. Pretoria:
Van Schaik. 

Hyland, K. 2000. Disciplinary discourse. London: Longman. 

Ivanic, R. & Simpson, J. 1992. Who's who in academic writing. In: Fairclough, N. (ed.). Critical 
language awareness. New York: Longman. 

Martin, J.R. & Rose, D. 2003. Working with discourse: Meaning beyond the clause. Sydney. 

Prosser, M. & Webb, C. 1993. Relating the process of undergraduate essay writing to the 
finished product. Studies in Higher Education, March:l-30. 

Swales, J.M. & Feak, C.B. 1994. Academic writingfor graduate students. Ann Arbor: The University 
of Michigan Press. 

Ventola, E. 1998. Textlinguistics and academic writing. In: Allori, PE. (ed.). Academic discourse in 
Europe: Thought processes and linguistic realization. Rome: Bulzoni. 


Comprehension of pictures in educational materials 
for HIV/AIDS

Adelia Carstens 
University of Pretoria 


Comprehension of pictures in educational materials for HIV/AIDS 

The article reports on a research project that was aimed at determining differ-
ences in picture comprehension between literate and low-literate audiences in
the context of HIV and AIDS. Structured interviews were held with 30 low-
literate and 25 literate adult speakers of African languages. The responses
were coded and analysed. Although metaphorical pictures proved to be
problematic for both literates and low-literates, the low-literate group was
more inclined towards literal interpretation of pictorial metaphors, and
culturally encoded meanings seemed to obfuscate the meaning of certain
commonly used mass media symbols. Pictures containing symbolic-abstract
components posed particular problems for the low-literates, and interpretation
problems were compounded by poor legibility, complexity and an unclear
figure-ground distinction.


1. The problem: HIV/AIDS, literacy and health communication 

Effective communication is the backbone of health promotion and disease 
prevention. People need to understand health information to apply it to their 
own behaviour. Davis, Crouch, Wills, Miller & Abdehou (1990: 533)
regard comprehension as the most important of all the literacy skills used in 
health care. These authors found in their research in the United States that the 
average reading comprehension of public clinic patients was at the 6th grade,
5th month, whereas most tested patient education materials required a reading
level of the 11th to the 14th grade. Forty percent of all public clinic patients
tested were reading below a 5th-grade level and could be considered 'severely
illiterate' (cf. also Plimpton & Root, 1994: 86). The South African situation is
comparable. Basic instructional materials on health issues (including HIV and
AIDS) have a readability level of just below 60, which is equivalent to Grade 9
(Carstens & Snyman, 2003), while more than 70% of the South African popu-
lation have only marginal reading skills: 30% are functionally illiterate and
the other 40% have limited skills (Carstens, 2004; Project Literacy, 2004). A
compounding factor is that, as a rule, 30-50% of low-literate patients read 3-5
years below their educational level (cf. Davis et al., 1990: 535, 537). Moreover,
the grade level reported by low-literate audiences is often adjusted upward by
a few levels, presumably to save face.
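The readability figure cited above ('just below 60, which is equivalent to Grade 9') is not tied to a named formula in the article; on the widely used Flesch Reading Ease scale, however, scores in the high 50s do correspond roughly to Grade 9 reading. A minimal sketch of a Flesch-style computation follows; the vowel-group syllable counter is a rough heuristic introduced here for illustration, not the instrument used by Carstens & Snyman (2003).

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Scores of roughly 60-70 correspond to Grade 8-9 reading."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Crude syllable estimate: groups of consecutive vowels per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835 - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

sample = ("If people cannot understand the health care information "
          "available to them, they are unable to change potentially "
          "harmful behaviours.")
score = flesch_reading_ease(sample)  # one long, polysyllabic sentence
```

A text scoring just below 60 on this scale sits exactly at the boundary the article identifies: readable for Grade 9 completers, but out of reach for readers with marginal skills.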

One may ask why formal education is so important in the realm of health. 
The answer lies in the fact that years of schooling completed is one of the 
most important socioeconomic correlates of good health in adult populations 
(Grosse & Auffrey, 1989: 281). The most poorly educated adults, those with 
the lowest literacy levels, suffer the highest rates of morbidity and mortality 
from chronic diseases and conditions (Rudd, Moeykens & Colton, 1999; 
Plimpton & Root, 1994; NWGLH, 1989). This correlation can be explained as 
follows: If people cannot understand the health care information available to 
them, they are unable to change potentially harmful behaviours, and improve 
their health. 

In developing countries such as South Africa, where almost two thirds of 
the population cannot read basic health education materials, the solution is 
often sought in visual media (Arbuckle, 2004). In health campaigns across the 
world pictures are used where the written word fails to communicate effec- 
tively - usually to supplement, extend or reinforce oral instructions (cf. Doak 
et al., 1996: 92; Mayeaux et al., 1996: 205, 207). Moreover, various studies report on
the successes of using pictures in health education in developing countries 
(cf. Hoffmann, 2000; Linney, 1995; PATH, 2002; Plimpton & Root, 1994; Toma- 
selli & Tomaselli, 1982; Zimmermann, 1981). 

However, there are deficiencies in the research conducted thus far: 

• It has not been proven beyond doubt that people who cannot read well
will be able to comprehend and learn from visual communication (Doak
et al., 1996: 92).

• Almost all the studies on the interpretation of pictures in health educa- 
tion that have been undertaken, lack a purposeful theoretical orienta- 
tion and a sound theoretical basis (Hoffmann, 2000: 136). Many sources 
on problematic pictures in development contexts can at most be regarded 
as anecdotal accounts or hybrid lists of the difficulties observed (in- 
cluding a variety of semantic, syntactic, pragmatic, cognitive, cultural 
and stylistic problems) (cf. Colle & Glass, 1986).

• No comparative studies have been done to prove that there is a signifi- 
cant difference in the comprehension of certain types of pictures be- 
tween low-literate and literate audiences. 

The research reported on in this article was particularly aimed at establishing 

• whether purely analogical (representational) pictures are interpreted 
without difficulty by literate as well as low-literate audiences; 




• whether low-literate audiences experience problems interpreting sym- 
bolic-analogical pictures, whereas literate audiences experience fewer 
problems or no problems at all; 

• whether low-literate audiences experience problems interpreting sym- 
bolic-abstract pictures, whereas literate audiences experience fewer 
problems or no problems at all. 


2. Categories of pictures often reported as 'difficult' for low-literates 

The problems mentioned in the diverse literature on picture comprehension 

by low-literate audiences can be roughly categorised as 

• pictures with too much distracting detail in the background, causing 
the unskilled viewer to miss the central focus of the visual, or to focus 
on the wrong detail (Ausburn & Ausburn, 1983: 113; Doak et al., 1996: 
93; 103; Linney, 1995: 23; NCI, 1994; PATH, 2002: 2); 

• pictures that are misunderstood due to differences in the knowledge 
systems of the author/compiler/designer and the audience (Bradley, 1995: 
1; Cornwall, 1992; Doak et al, 1996: 99; Tomaselli & Tomaselli, 1984; Tripp- 
Reimer & Afifi, 1989: 613); 

• pictures reflecting western pictorial conventions of depth perspective, 
including linear perspective, occlusion, relative size, etc. (Arbuckle, 2004; 
Bradley, 1995: 74; Linney, 1995: 23-24; PATH, 2002: 2); 

• pictures representing unseen entities, e.g. 

— objects that are too small to be observed by the human eye, such 
as the HI virus; 

— objects that are concealed inside an outer 'shell', such as internal 
organs; 

— the operation of systems (Colle & Glass, 1986: 161; Dudley & Haa- 
land, 1993: 37); 

— depiction of movement, such as a physical object traveling from 
one point to another, emission of light, steam, breath, sound, etc., 
by making use of lines (Arbuckle, 2004; Colle & Glass, 1986: 161; 
Hoffmann, 2000: 142; PATH, 2002: 2).

• pictures rendered in unfamiliar art styles, e.g. cartoon-style drawings 
of people and objects, which both simplify and distort (Doak et al, 1996: 
95; Plimpton & Root, 1994: 86); 

• complex pictures of which the meaning is dependent on a particular 
sequence, e.g. left to right, or correct interpretation of a particular rela- 
tionship, e.g. causal, temporal, etc. (Colle & Glass, 1986: 161; Haaland, 
1984; Hoffmann, 2000: 95; 134; 142; Linney, 1995: 23-25); 




• pictures containing symbols from popular media or non-familiar knowl-
edge domains, e.g.: 

— metaphoric symbols, such as a heart (love), a dove (peace), etc. 
(Linney, 1995: 24; PATH, 2002: 2). (Although they are representa- 
tional, a transferred meaning is conveyed, which is often culture-
dependent); 

— abstract symbols of which the meanings are not transparent, but 
have been fixed by convention, e.g. road signs, arrows, ticks, cross- 
es, mathematical symbols, circles and splashes of colour (to point 
out or indicate) (Colle & Glass, 1986: 161; Doak et al., 1996: 103; 
106; Kress & Van Leeuwen, 1996: 61-70; PATH, 2002: 2); as well as 
certain pictorial conventions used in the cartoon style, e.g. thought 
balloons and speech balloons (Colle & Glass, 1986: 16; Hoffmann, 
2000: 142; PATH 2002: 2; Hugo & Skibbe, 1991:49). 

In semiotics signs are often classified as either symbolic (where the relation- 
ship between the signifier and the signified is arbitrary) or analogical (where 
the relationship between the signifier and the signified is logical, natural or 
direct). In the case of pictures, 'analogical' normally also entails 'literal',
and irrespective of the literacy level of the viewer, this type of picture is
usually misunderstood only if the viewer is not familiar with the object
portrayed.

Following Gralki (1985), Hoffmann (2000: 8) distinguishes two types of pic- 
tures with a partially or completely symbolic (arbitrary) content, namely sym- 
bolic-analogical and symbolic-abstract. These types of pictures are not direct 
representations of people's real-world perceptions. In other words, they are
not purely analogical. Some kind of cognitive transformation, based on aca- 
demic or cultural knowledge, is needed to connect them to their intended 
meanings. 

Symbolic-abstract representations are images that are purely fixed by con- 
vention. Hoffmann (2000: 85) characterizes them by saying that with these 
images "there is a constant tendency to cross the line into the field of written 
representation". Figures, formulae, tables, mathematical symbols (e.g. the con- 
ventionalized symbols for equation, addition, subtraction, multiplication, di- 
vision, etc.), logical notation (e.g. an arrow to signify entailment), etc., are ex- 
amples of signs that are closer to written signs than pictorial signs. Although 
not mentioned by Hoffmann, speech balloons and thought balloons may be 
categorised as symbolic-abstract as there is no direct, natural or logical resem- 
blance between form and meaning (apart from the fact that they are 'contain- 
ers' of embedded meaning), and since their meanings are only fixed by con- 
vention. Despite the fact that signets and logos may have originally been
analogical, Hoffmann categorizes them as symbolic-abstract once they have
become the standard symbols for companies (e.g. the Mercedes star), products
(the international wool mark) or organizations (e.g. the five Olympic rings, 
the International Red Cross). 

Symbolic-analogical pictures constitute a hybrid category between sym- 
bolic and analogical. These signs are interpreted symbolically in that their
use is fixed by convention, and analogically in that some structural or func- 
tional resemblance to the object represented is preserved in the meaning. This 
category includes 

• various types of diagrams which people use to convey abstract con- 
cepts, such as quantities, relationships or processes: a bar chart shows a 
comparison between separate entities of different sizes (containers); a 
line graph shows relationships between elements (evolution); 

• pictorial metaphors, such as a (schematic) representation of the heart to 
represent love; a representation of a spider's web to indicate a URL/ 
Internet site; 

• pictorial metonymies, in the case of which the picture represents an 
element or an attribute of an object, action, event or condition, e.g. a 
representation of a clock to represent time in general; a smiley face to 
represent happiness or friendship; a knife and fork to represent a res- 
taurant, a bed to represent overnight accommodation (cf. Hoffmann,
2000: 85). 

• Graphemes with fixed, conventional meanings that present unseen 
entities, e.g. lines to indicate movement, sound, light or heat, etc., are 
analogical in a certain way. Although these lines depict imaginary ob- 
jects or events, scientific knowledge (e.g. about light and sound waves, 
etc.) partially motivates them. 

Although the exposition given above suggests that the categories analogical/ 
literal, symbolic-analogical and symbolic-abstract are non-discrete, this art- 
icle will treat all non-literal pictures as either belonging to the category sym- 
bolic-analogical or symbolic-abstract. 


3. Research methodology 

The method of data-collection is primarily qualitative (structured interviews), 
but demonstrates elements of an experiment since the same interview sched- 
ule was used for two different respondent groups, and after coding the data, 
the two groups were compared, using statistical techniques. 
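The article states only that 'statistical techniques' were used to compare the two groups, without naming them. For dichotomously coded items (correct/incorrect), one standard choice would be a Pearson chi-square test on a 2×2 table; the sketch below applies it to the counts later reported in Table 3 for picture 1 (main picture), purely as an illustration and not as a reconstruction of the study's actual analysis.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    where each row holds (correct, incorrect) counts for one group."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Table 3, picture 1 (main picture): 24 of 30 low-literates and
# 23 of 25 literates recognised the female adult and child.
chi2 = chi_square_2x2(24, 6, 23, 2)

# With 1 degree of freedom the critical value at p = 0.05 is 3.84,
# so this particular item difference does not reach significance.
significant = chi2 > 3.84
```

On items with larger gaps between the groups (e.g. the abstract symbols in Table 6) the same test would yield much larger statistics.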




3.1 Respondents

The experimental group comprised 30 low-literate speakers (persons with lit- 
eracy levels below 9 years of formal schooling) of 8 African languages between 
the ages of 22 and 55 years, and the control group comprised 25 literate (with 
literacy levels above 9 years of formal schooling) speakers of 8 African lan- 
guages aged between 23 and 49. 

The literacy levels of the respondents were determined on the basis of self- 
reports regarding years of formal schooling. Eight years of schooling was set 
as the upper limit for the experimental group, because it is regulated by South 
African law that learners who have passed Grade 9 may leave school and start
tertiary training. A supporting motivation was that persons with fewer than 9 
years of schooling are regarded (in terms of the categories defined by Project 
Literacy 2004) as only 'marginally literate'. The lower limit for the control group 
was 9 years of schooling and the upper limit was students who have complet- 
ed a first degree. 

The sampling method was both convenient and purposive, as the data 
gatherers relied on personal acquaintances to identify and recruit respond- 
ents who satisfied the literacy requirements. All the respondents were inter- 
viewed individually. Low-literate respondents were interviewed in Constan- 
tia Park and Waterkloof Glen in Pretoria as well as in KwaMhlanga, and liter- 
ate respondents were interviewed in Pretoria Central, Menlo Park, Lynn- 
wood and Hatfield. 1 

The gender imbalance (25 females vs. 5 males in the experimental 
group; and 18 females vs. 7 males in the control group) should be attrib- 
uted to the sampling strategy (convenience). 

The following matrices summarize the socio-demographic profile 
of the respondents: 

Table 1: Socio-demographic profile of the experimental group

Age   Years of formal   Gender      Occupation            Home language
      education
                        Female 25   Domestic workers 16   IsiZulu 9
                        Male 5      Unemployed 7          IsiNdebele 7
                                    Gardeners 4           Sepedi 7
                                    Cleaners 3            Sesotho 3
                                                          Setswana 1
                                                          Siswati 1
                                                          Xitsonga 1
                                                          IsiXhosa 1

1 The literate respondents were interviewed, and the data for the literate
group was coded, by Ms. L. Birir-Gangla.




Table 2: Socio-demographic profile of the control group

Age         Years of formal   Gender      Occupation                  Home language
            education
Mean 32.8   Mean 12.6         Female 18   Domestic workers 5          IsiZulu 4
                              Male 7      Administrative workers 13   IsiNdebele 1
                                          Undergraduate students 7    Sesotho 1
                                                                      Sepedi 1
                                                                      Setswana 1
                                                                      Siswati 1
                                                                      Tshivenda 1
                                                                      Other 15

3.2 Materials 

The materials comprised a compilation of fourteen pictures from various pub-
lic information documents on HIV/AIDS that had been collected from educa-
tional and public health care facilities (clinics, hospitals, schools) in and around
Pretoria during the period 2000-2004. The pictures were scanned and arranged
in a narrative sequence, which could be characterized as 'the story of AIDS'. A
narrative sequence was chosen to introduce logic into the design, in the ab-
sence of written text or oral narration. In the case of the low-literate respond-
ents a pre-interview briefing was done to ensure that they were familiar with
the main concepts regarding HIV and AIDS.

The picture sequence included pictures on talking about sex and pregnan- 
cy; talking about sex and protection against HIV/AIDS; postponing sexual 
debut; HIV/AIDS and pregnancy; negotiating condom use; HIV-testing (and
counselling); regular exercise; getting rest; healthy and unhealthy food choic- 
es (lunch); prohibition of smoking and alcohol use; and taking antiretroviral 
medicines according to schedule (compare Addendum A). 

Due to the availability of pictures on the different aspects of the AIDS nar- 
rative, and in order to test particular picture types, the portfolio included pic- 
tures rendered in different art styles: black and white semi-realistic line draw- 
ings, coloured, cartoon-style line-drawings; coloured, realistic line drawings 
with background detail; a silhouette; coloured, semi-realistic line drawings 
without background detail; shaded colour drawings without background de- 
tail; shaded colour drawings with background detail. 

3.3 Data collection 

The data were collected via structured interviews. The general procedure was 
to start each interview by introducing the interviewer, and asking about the 
preferred language for the interview. Interviewees were informed that their 




responses would be tape-recorded anonymously, that their participation was 
voluntary, and that they were entitled to withdraw their participation at any 
stage during the research process. Each respondent was asked verbally for 
his/her consent to use the data, and to proceed with the interview. 

After the introduction, the respondents were briefed on the purpose of the 
research, namely to assist the researcher in finding out which of the pictures 
should be included in health education materials distributed at clinics. 

It was assumed that low-literate participants would have less general know-
ledge about preventing and living with AIDS than literate participants. There- 
fore, in the low-literate condition the structured interviews were preceded by 
a semi-structured briefing (without visual support) on the topic of HIV/AIDS, 
according to a schedule covering sexual debut, prevention of HIV/AIDS, and 
coping with the illness. The researcher asked questions, confirmed correct 
answers, and provided correct information where the respondent did not know 
the answer or held erroneous beliefs, in order to create sufficient contextual 
knowledge for the interpretation of the pictures. Respondents were invited to 
ask questions, and to comment on any of the issues raised. In the literate con- 
dition, this phase was skipped. 

Subsequently, socio-demographic information was recorded (age, occupa- 
tion, years of formal schooling, mother-tongue), followed by the actual experi- 
ment. In both conditions the interviewer presented the respondent with the 
pictures one by one and asked questions in a semi-structured fashion about (i) 
the recognition of the objects, (ii) the relationship between the objects and (iii) 
the message of the visual. Respondents were invited to comment on particul- 
ar aspects of the visual if they had not referred to them in the initial response. 

The first ten low-literate respondents were interviewed in the township of 
KwaMhlanga. The interviews were conducted at the house of one of the par- 
ticipants, Ms Elsie Mahlangu. The pre-interview briefing on HIV/AIDS took 
the format of a group discussion. Since the mother tongue of all the attendees 
was IsiNdebele, but the preferred language was English, both languages were 
used, with interpretation between them by a fluent speaker of both. The actu- 
al interviews took place individually. After the first two respondents, the in- 
terviewer realised that the respondents were much more fluent in Afrikaans 
than in English. Therefore, all the other interviews took place in Afrikaans, as 
it saved time, and produced direct and reliable answers. The assistance of the 
interpreter was sought whenever a word or phrase in either Afrikaans or 
IsiNdebele was not understood by one of the participants. 

The other twenty low-literate respondents were interviewed in Constan- 
tia Park and Waterkloof Glen. The interviews took place by prior arrangement 
with the home owner. Pre-interview briefings and interviews were conducted
individually. All the interviews were conducted in Afrikaans, the lingua
franca in the eastern suburbs of Pretoria. All the
interviews were recorded, transcribed and translated into English. 

The interviews with the twenty five literate respondents were conducted 
individually, using English as a medium of communication. 

3.4 Coding and analysis of data

After conducting a session of interviews the responses were typed on data 
sheets. A code book was devised on the basis of the research questions and 
the specific characteristics of each picture included in the testing instrument. 
The following categories of information were included in the code book: 

• column number, 1-30 (low-literates); 1-25 (literates)

• column label, e.g. vis1_1, vis1_2, etc.

• column descriptor, e.g. in the case of vis1_5 - 'Able to identify shape as
a thought balloon'

• value, e.g. 1, 2

• value label, e.g. 1 = 'yes' (if the respondent gave an appropriate inter-
pretation of the pictorial symbol); and 2 = 'no' (if the respondent failed
to give an appropriate interpretation of the pictorial symbol).

The variables (column labels) and values were read into version 13.0 of the 
statistical analysis program SPSS, and the data was coded in order to calculate 
frequencies and averages. 
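The code book and the frequency step carried out in SPSS can be pictured as a small data structure. The variable name vis1_5 and its descriptor are taken from the text above; the coded responses are invented for illustration and are not the study's data.

```python
# Code book entry as described in section 3.4 (column label, descriptor,
# value labels); the responses below are hypothetical.
code_book = {
    "vis1_5": {
        "descriptor": "Able to identify shape as a thought balloon",
        "values": {1: "yes", 2: "no"},
    },
}

# Hypothetical coded responses for ten respondents on vis1_5:
responses = {"vis1_5": [1, 2, 2, 1, 2, 2, 2, 1, 2, 2]}

def frequency(variable):
    """Percentage of respondents coded 1 ('yes') on a variable."""
    codes = responses[variable]
    return 100 * codes.count(1) / len(codes)

pct_yes = frequency("vis1_5")  # 3 of 10 coded 'yes' -> 30.0
```

The tables in section 4 report exactly this kind of per-variable frequency for the two respondent groups.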

The database comprised 70 variables or questions (represented by the col- 
umn labels), of which 6 covered the socio-demographic details of the respond- 
ents. The remaining 64 questions were operationalisations of the research 
questions. The following dimensions of the respondents' picture comprehen- 
sion performance were covered: 

• Literal (analogical) recognition of people and objects: pictures 1 through 
10 (control questions) 

• Interpretation of symbolic-analogical aspects of pictures 

— pictorial metaphors (pictures 4 and 5) 

• Interpretation of symbolic-abstract aspects of pictures 

— abstract symbols (pictures 1, 3, 5, 8a, 8b, 9a, 9b) 

— symbolic-abstract cartoon-style conventions: thought and speech 
balloons (pictures 1 through 5) 




4. Discussion of findings 

4.1 Interpretation of analogical pictures

Both literate and low-literate respondents had little difficulty in recognizing 
analogical pictures with a literal meaning (irrespective of art style): 89% of the 
low-literate respondents and 82% of the literate respondents identified the 
people/objects in the pictures correctly. Compare the distribution in the fol- 
lowing table: 


Table 3: Literal (purely analogical) recognition of objects

                                                              Low-literates    Literates
                                                              /30       %      /25       %
Picture 1: Recognises female adult and child in main picture   24      80.0     23      92
Picture 1: Recognises man, woman and baby in thought balloon   22      73.3     22      88
Picture 2: Recognises the male adult and boy                   24      80.0     23      92
Picture 2: Recognises the condom in the speech balloon         26      86.7     23      92
Picture 3: Recognises young male and female                    27      90.0     23      92
Picture 3: Recognises the bed in the thought balloon           29      96.7     25     100
Picture 4: Recognises young male and female                    30     100.0     24      96
Picture 4: Recognises condom in speech balloon                 28      93.3     15      60
Picture 5: Recognises the adult male and female                30     100.0     21      84
Picture 6: Recognises the syringe                              30     100.0     20      80
Picture 6: Recognises the test tube with blood                 11      36.7     12      48
Picture 6: Recognises the male and female                      28      93.3     20      80
Picture 7a: Recognises the person as a male playing
            with a ball/playing soccer                         28      93.3     22      88
Picture 7b: Describes the person as a female who is
            sitting on a couch, reading a book                 29      96.7     24      96
Picture 8a: Recognises the foods                               30     100.0     19      76
Picture 8b: Recognises the foods                               29      96.7     18      72
Picture 9a: Recognises the bottle of liquor                    28      93.3     22      88
Picture 9b: Recognises the ash tray and cigarette              30     100.0     13      52
Picture 10: Recognises the young female                        30     100.0     19      76
Picture 10: Recognises pills/medicine                          23      76.7     22      88

Averages                                                               89.3             82.0


The only object poorly recognised or incorrectly identified by both low-literates
(37% correct) and literates (48% correct) was the test tube with blood in
picture 6. In the low-literate group, 9 respondents did not recognize the test
tube with blood at all. Other responses included 'pills' (4), 'a man's thing' (2),
'vaccine' (1), 'the things you put on your lips' (1), and 'a condom' (1). The
poor recognition can probably be attributed to a lack of familiarity with the
object. Few of the respondents might have gone for HIV-testing, or have seen
the type of container in which blood is transported to testing laboratories.

4.2 Interpretation of pictures with symbolic-abstract elements

Two types of symbolic-abstract pictures were reasonably well represented in
the corpus, namely callouts (speech and thought balloons) containing pic- 
tures representing what is said or thought by the characters in the main pic- 
tures, and abstract symbols. 

The following criteria were applied to establish how well speech and 
thought balloons were understood: 

• correct interpretation of the relationship between the contents of the 
callout and the rest of the picture, demonstrated by using either a verb 
of saying (say, talk, discuss, teach, tell, instruct, etc.) or a verb of
cognition (think);

• naming the shape correctly, by using words such as speech/thought bal-
loon, or callout. 

Tables 4 and 5 summarize the responses relating to these criteria. On average 
both literate and low-literate respondents scored low on understanding the 
relationship between a speech or a thought balloon and the persons whose 
speech or thoughts are captured in these shapes. It could be assumed that 
second language speakers of English would not know the appropriate terms 
for these shapes. However, the researcher was surprised that such a large per- 
centage of literate respondents was not acquainted with the pictorial conven- 
tion itself. Less than 40% of them gave appropriate explanations for the rela- 
tionships between the callouts and the main pictures. The fact that 67% of the 
low literates and 80% of the literates seemed to understand the relationship 
between the speech balloon and the two characters (the man and the boy) in 
picture 2, could perhaps be attributed to their interpretation of other visual 
cues, namely the adult male's parted lips. 

Table 6 gives an overview of the interpretation of abstract symbols by the
two groups:


Table 4: Correct interpretation of speech and thought balloons

                                                              Low-literates    Literates
                                                              /30       %      /25       %
Picture 1: The respondent uses a verb of cognition such as
           thinks, understands, etc. to describe the re-
           lationship between main picture and thought
           balloon                                              3      10.0     13      52
Picture 2: The respondent uses a verb of saying such as
           say, talk, tell, etc. to describe the relation-
           ship between main picture and speech balloon        20      66.7     20      80
Picture 3: The respondent uses a verb of cognition to
           describe the relationship between main picture
           and content of thought balloon                       1       3.3      9      24
Picture 4: The respondent uses a verb of saying to
           describe the relationship between main picture
           and speech balloon                                  11      36.7      6      24
Picture 5: The respondent uses a verb of cognition to
           describe the relationship between the main
           picture and the thought balloon                      4      13.3      2       8

Averages                                                               26.0             37.6


Table 5: Correct labelling of speech and thought balloons

                                                              Low-literates    Literates
                                                              /30       %      /25       %
Picture 1: Able to label shape as a thought balloon             1       3.3      0       0
Picture 2: Able to label the shape as a speech balloon          1       3.3      2       8
Picture 3: Able to label the shape as a thought balloon         0       0.0      1       4
Picture 4: Able to label shape as a speech balloon              0       0.0      1       4
Picture 5: Able to label the shape as a thought balloon         0       0.0      2       8

Averages                                                                1.3              4.8




Table 6: Correct interpretation of abstract symbols

                                                              Low-literates    Literates
                                                              /30       %      /25       %
Picture 1: Interpret content of thought balloon correctly
           (if a man and a woman have intercourse they
           can have a baby)                                     4      13.3     20      80
Picture 1: Able to name abstract symbols (+ and =)              1       3.3      6      24
Picture 3: Interpret the content of the thought balloon
           appropriately                                       15      50.0     21      84
Picture 3: Recognize the cross as a sign of prohibition         9      30.0     22      88
Picture 5: Able to identify the question mark in the
           thought balloon                                      5      16.7      6      24
Picture 5: Able to relate the question mark in the
           thought balloon to the main picture                  0       0.0      3      12
Picture 8a: Describe the food as not healthy                   18      60.0     24      96
Picture 8a: Indicate the red cross in support of
            previous answer                                    13      43.3     18      72
Picture 8b: Describe the food as healthy                       22      73.3     22      88
Picture 8b: Indicate the green tick in support of the
            previous answer                                    13      43.3     16      64
Picture 9a: Interpret the prohibition correctly                27      90.0     23      92
Picture 9a: Indicate the red cross in support of
            previous answer                                    25      83.3     22      88
Picture 9b: Interpret the prohibition correctly                27      90.0     24      96
Picture 9b: Indicate the red cross in support of
            previous answer                                    24      80.0     22      88

Averages                                                               48.3             71.1


Table 6 shows that the low-literates scored 22% lower than the literates on the 
interpretation of pictures with symbolic-abstract elements. This is not surpris- 
ing: their lack of formal education did not facilitate the development of a vo- 
cabulary for abstract symbols, and also resulted in a limited development of 
formal (higher order) thought processes such as categorisation and critical 
reflection. 
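As a consistency check, the group averages reported in Table 6 can be re-derived from the per-item counts of correct interpretations (out of 30 low-literates and 25 literates); the counts below are read directly off the table.

```python
# Correct interpretations per Table 6 item.
low_lit_correct = [4, 1, 15, 9, 5, 0, 18, 13, 13, 22, 27, 25, 27, 24]
literate_correct = [20, 6, 21, 22, 6, 3, 24, 18, 16, 22, 23, 22, 24, 22]

# Average of the per-item percentages, as reported in the table.
avg_low = sum(100 * n / 30 for n in low_lit_correct) / len(low_lit_correct)
avg_lit = sum(100 * n / 25 for n in literate_correct) / len(literate_correct)
# Matches the reported averages of 48.3 and 71.1.
```

Note that these are unweighted averages of per-item percentages, not percentages pooled over all responses; both conventions are common, and the table uses the former.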

Crosses and ticks were noticed and interpreted correctly by the low-literate
group in those pictures where the symbols were emphasised through the use
of colour and legibility (size and weight). A red cross as a sign of prohibi-
tion, superimposed on an object, was recognized by approximately 75% of 


26 


Ensovoort: jaargang 1 1, nommer 2, 2007 


the low-literate respondents (compare the responses for pictures 9a and 9b). 
However, where the abstract symbol was buried in the background, recogni- 
tion dropped by about a third. Less than half of the low-literates recognized 
the cross and the tick in pictures 8a and 8b. The complexity of the picture also 
seems to reduce the probability that an abstract-symbolic visual will be recog- 
nized and correctly interpreted by people with limited literacy skills. In pic- 
ture 3 only a third of the low-literate respondents identified and correctly 
interpreted the cross in the thought balloon. Cognitive load could have been 
decreased by introducing an additional cognitive-perceptual layer, namely 
colour. Since bright orange was used for headings in the booklet from which 
this particular picture was taken, spot colour could have been used for em- 
phasis in black and white line drawings. 

Mathematical symbols presented major problems for the low-literate group. Only one respondent was able to name the symbols + and = in the thought balloon of picture 1, and to give an acceptable interpretation of the content. A confounding factor might have been that the symbols in the balloon are not used in their primary logico-mathematical senses. The symbol + is used as a synonym for unite rather than for the sum of, and the symbol = means "is the product of", rather than "equals". Another possible reason why the + was not interpreted as a mathematical symbol is that in the context of health education, this symbol is often used to represent health care facilities. Four of the low-literate respondents identified the + as a sign for a hospital or clinic.

4.3 Interpretation of symbolic-analogical pictures

Mayeaux et al. (1996) advise that "To be effective with patients whose literacy skills are low, patient education materials should be short and simple, contain culturally sensitive graphics and encourage desired behaviour." Culturally insensitive graphics would, for instance, include pictures portraying taboo objects, and unfamiliar pictorial metaphors. Although only two culture-dependent metaphorical pictures were included in the testing materials, the responses indicated that this kind of visual was problematic. Even the literate respondents (the majority of whom are speakers of African languages) had some difficulty in relating the pictorial metaphor to its intended meaning in picture 5. Compare Table 7:




Table 7: Correct interpretation of metaphor

                                                      Low-literates            Literates
                                                      No. of correct           No. of correct
                                                      interpretations          interpretations
                                                      (out of 30)     %        (out of 25)     %

Picture 4: Associate hearts with love                       9       30              17        68
Picture 5: Recognize the red object in the thought
  balloon as a metaphor for illness                         9       30               4        16
Averages                                                            30                        42


When including picture 4, the researcher did not anticipate that the hearts surrounding the boy's head would be ambiguous. It was taken for granted that the romantic meaning of red hearts had been popularized by mass media such as cartoons, greeting cards, etc. The responses of the low-literate group could be interpreted as reflecting a lack of exposure to print materials in general. Another possible explanation for the comprehension problems that occurred is the influence of language-supported cultural meanings. When 5 low-literate respondents answered that the boy is 'thinking with the heart' (and when asked for an explanation they said he was worrying or ruminating), specialists in a number of African languages were consulted to find out whether these languages contained a semi-idiomatic expression that would be translated directly as 'thinking with the heart', but meaning "to worry". According to a lexicographic practitioner who speaks several African languages, the Sepedi expression o bolela ka pelo and the IsiZulu expression ukhuluma ngenhliziyo can be literally translated as 'to talk with the heart', but actually mean "to worry". Although the number of respondents who assigned this meaning was relatively small, their responses indicated that metaphorical portrayals should never be used without pretesting them among members of the cultural group(s) of which the audience is part.

Picture 5 was included because it was expected that the low-literate respondents might experience problems with interpreting the metaphor for AIDS. Astonishingly, it was not only the low-literates who scored very low on comprehending the picture in the thought balloon; the literates performed even worse: whereas 30% of the low-literate respondents related the red monster to sickness (responses included the terms 'sickness', 'germ', 'bacterium' and 'virus'), only 16% of the literates gave appropriate answers. According to African language specialists, the African languages do not have a unified way of metaphorising AIDS, and it would therefore be extremely difficult to use one single metaphorical picture to symbolise the meaning of AIDS for speakers of all the African languages.


5. Conclusion 

5.1 Summary of findings

Consistent with previous research, it was found that purely analogical (literal; representative) pictures pose relatively few interpretation problems (on average less than 18%). The fact that the low-literates' recognition of objects was slightly higher than that of the literates (by 7%) is probably due to the pre-interview briefing of the low-literates on the core concepts of HIV and AIDS.

As far as the interpretation of symbolic-analogical (metaphorical) pictures by low-literates is concerned, two problems are worth mentioning:

• overriding of (intended) popular mass media interpretations by culturally/linguistically encoded meanings (picture 4);

• literal (analogical) interpretation of metaphorical pictures (picture 5).

An unexpected finding regarding the literate group was that they scored lower on recognising the symbolic-analogical picture than the low-literate group: only 16% (as opposed to 30% of the low-literates) related the red monster in the thought balloon (picture 5) to HIV/AIDS, illness or a related concept. This could be due to the absence of a pre-interview briefing session, the style of the interviewer, and/or the time allowed by the interviewer to peruse each picture before responding.

These findings suggest that in special-purpose communication, such as 
health education materials, it is dangerous to use metaphorical portrayals that 
have not been tested among members of the target populations, irrespective 
of literacy level. 

The research in connection with symbolic-abstract pictures confirmed that people with limited reading skills and limited exposure to written media experience major problems interpreting abstract picture conventions such as speech balloons and thought balloons, and symbols from systems of formal logic, such as plusses, minuses, crosses and ticks. Interpretation problems are compounded where cognitive load is high, e.g. when legibility is poor, when the symbols form part of a complex visual, or when they occur in the background of a picture.

Contrary to expectations, the literates' interpretation of the content of speech and thought balloons was only approximately 12% better than that of the low-literates, and they also performed poorly on labelling these shapes. They did, however, understand symbols with logical/mathematical elements significantly better than the low-literates. They were also generally better equipped to reflect on their responses, such as indicating a symbol (cross, tick, etc.) as the source of a particular answer (e.g. pictures 3 and 8b).

5.2 Limitations of the research

Due to the qualitative (and therefore exploratory) nature of the initial design and the availability of materials, the portfolio of testing materials did not include a sufficient number of pictures in each problematic category - for instance symbolic-analogical (metaphorical) pictures. The findings are therefore not conclusive, and need further testing before generalisations can be made.

5.3 Suggestions to materials developers 

It has become clear that certain types of pictures are problematic across a wide spectrum of literacy levels. It is therefore suggested that simple, low-literacy materials with clear, concrete graphics be considered for all patients, regardless of their reading skills (cf. NWGLH, 1989). In the US, patients in both high and low socio-economic and reading-ability groups have indicated that they prefer short, simple and colourful materials (NWGLH, 1989). A study comparing a simplified sixth-grade reading-level brochure about polio (which combined simple text and line drawings with captions) to a 10th-grade reading-level brochure demonstrated that patients of all reading levels and all socioeconomic levels preferred the shorter and simpler pamphlet. High-level readers understood both brochures equally well but took less time to read the shorter one. No one was offended by its simplicity.

Designers of public health education materials in South Africa are advised to design these documents at a 5th-grade level (NWGLH, 1989), and to include colourful, culturally relevant pictures as well as ample white space. Pictorial metaphors, art styles that distort objects, complex pictures with partially symbolic content, as well as abstract symbols from the written language should be omitted where possible.


Bibliography 

Arbuckle, K. 2004. The language of pictures: Visual literacy and print materials for Adult Basic Education and Training. Language Matters 35(2): 445-458.

Ausburn, F. & Ausburn, J. 1983. Visual analysis skills among two populations in Papua New Guinea. Educational Communication and Technology 31(2): 112-122.

Bradley, S.M. 1995. How people use pictures: An annotated bibliography. London: International Institute for Environment and Development and the British Council.

Carstens, A. 2004. Tailoring print materials to match literacy levels - a challenge for document designers and practitioners in adult literacy. Language Matters 35(2): 459-484.

Carstens, A. & Snyman, M. 2003. How effective is the Department of Health's leaflet on HIV/AIDS Counselling for low-literate South Africans? Tydskrif vir Nederlands en Afrikaans (Journal for Dutch and Afrikaans) 10(1): 112-136.

Colle, R. & Glass, S. 1986. Pictorial conventions in development communication in developing countries. Media in Education and Development, December: 159-162.

Cornwall, A. 1992. Body mapping in health. RRA Notes 16: 69-76. London: International Institute of Environment and Development.

Davis, T., Crouch, M.A., Wills, G., Miller, S. & Abdehou, D.M. 1990. The gap between patient reading comprehension and the readability of patient education materials. The Journal of Family Practice 31(5): 533-538.

Doak, C.C., Doak, L.G. & Root, J.H. 1996. Teaching patients with low literacy skills. Philadelphia: J.B. Lippincott.

Dudley, E. & Haaland, A. 1993. Communicating Building for Safety. Cambridge: Cambridge Architectural Research.

Edwards, D.N. (commissioned by the Department of Health) 2000. Ubungani: A parent guide for life skills, sexuality and HIV/AIDS Education. Pretoria: Government Printer.

Fuglesang, A. 1970. Picture style preference. OVAC Bulletin 22: 46-48.

Gauteng Provincial Department. 1999. HIV/AIDS Workplace Programme. Human Performance Systems.

Gralki, H.O. 1985. Framework of visualization. In: Ulrich, G. & Krappitz, U. 1985. Participatory approaches for cooperative group events - introduction and examples of application. Feldafing: DSE. 61-64.

Grosse, N. & Auffrey, C. 1989. Literacy and health status in developing countries. Annual Review of Public Health 10: 281-297.

Haaland, A. 1984. Pretesting communication materials. With special emphasis on child health and nutrition education. A manual for trainers and supervisors. UNICEF.

Hoffmann, V. 2000. Picture supported communication in Africa. Weikersheim: Margraf Verlag.

Kress, G. & Van Leeuwen, T. 2000. Reading Images: The Grammar of Visual Design. London: Routledge.

Linney, B. 1995. Pictures, people and power. Hong Kong: MacMillan Education.

Mayeaux, J.R., Murphy, P.W., Arnold, C.A., Davis, T.C., Jackson, R.H. & Sentell, T. 1996. Improving patient education with low literacy skills. American Family Physician, January: 205-211.

Messaris, P. 1994. Visual Literacy: Image, Mind & Reality. Colorado: Westview Press.

Moore, M.B., Sorensen, M. & Adebajo, C.F. 1990. Illustrated print materials for health and family planning education. World Health Forum 11: 302-307.

National AIDS Programme. (s.a.) Pregnancy and AIDS.

NCI (National Cancer Institute). 1994. Clear & simple: Developing effective print materials for low-literate readers. [O] Available at: http://oc.nci.NCI.gov/services/Clear_and_Simple/HOME.HTM. Accessed on 15/10/2003.

NWGLH (National Work Group on Literacy and Health). 1989. Communication with patients who have limited literacy skills. Journal of Family Practice 46(2): 168-176.

PATH (Program for Appropriate Technology in Health). 2002. Developing materials on HIV/AIDS/STIs for low-literate audiences. [O] Available at: http://www.fhi.org/en/HrVAIDS/Publications/manualsguidebooks/lowliteracyquide.htm. Accessed on 20/2/2004.

Plimpton, S. & Root, J. 1994. Materials and strategies that work in low literacy health communication. Public Health Reports 109(1): 86-92.

Project Literacy. 2004. Education Statistics. [O] Available at: http://www.projectliteracy.org.za/education.htm. Accessed on 22/09/2004.

Rudd, R.E., Moeykens, B.A. & Colton, T.C. 1999. Health and Literacy: A Review of Medical and Public Health Literature. Annual Review of Adult Learning and Literacy. New York: Jossey-Bass. [O] Available at: www.cete.org/acve/docs/pab00016.pdf. Accessed on 20/2/2004.

Shaw, B. 1969. Visual symbols survey: Report on the recognition of drawing in Kenya. London: Centre for Educational Development Overseas.

SoulCity/Khomanani. 2004. HIV and AIDS treatment.

Spain, S. 1987. Improving visual comprehension in nonliterates. Media Asia 14(2): 89-112.

Tshwane City Council. (s.a.) The Healthy Diet Plan. Premos Graphics.

Tomaselli, K.G. & Tomaselli, R. 1984. Media graphics as an interventionist strategy. Information Design Journal 4(2): 99-117.

Tripp-Reimer, T. & Afifi, L.A. 1989. Cross-cultural perspectives on patient teaching. Nursing Clinics of North America 24(3): 613-619.







6a (SoulCity/Khomanani 2004) 



6b (SoulCity/Khomanani 2004) 





7a (Gauteng Provincial Government 1999)



8a (Tshwane City Council s.a.) 





9a (SoulCity/Khomanani 2004) 



7b (Gauteng Provincial Government 1999) 



8b (Tshwane City Council s.a.) 



9b (SoulCity/Khomanani 2004) 



10 (SoulCity/Khomanani 2004) 



The assessment of entry-level students' academic literacy: 
does it matter? 

Alan Cliff and Kutlwano Ramaboa 
University of Cape Town 
Carol Pearce 

Cape Peninsula University of Technology 


The assessment of entry-level students' academic literacy: does it matter? 

In Higher Education, both nationally and internationally, the need to assess incoming students' readiness to cope with the typical reading and writing demands they will face in the language-of-instruction of their desired place of study is (almost) common cause. This readiness to cope with reading and writing demands in a generic sense is at the heart of what is meant by notions of academic literacy. 'Academic literacy' suggests, at least, that entry-level students possess some basic understanding of - or capacity to acquire an understanding of - what it means to read for meaning and argument; to pay attention to the structure and organisation of text; to be active and critical readers; and to formulate written responses to academic tasks that are characterised by logical organisation, coherence and precision of expression. This paper attempts to address two crucial questions in the assessment of students' academic literacy: (1) Does such an assessment matter, i.e. does understanding students' academic literacy levels have consequence for teaching and learning, and for the academic performance of students, in Higher Education? (2) Do generic levels of academic literacy in the sense described above relate to academic performance in discipline-specific contexts? Attempts to address these two questions draw on comparative data based on an assessment of students' academic literacy and subsequent academic performance across two disciplines at the University of Cape Town and the Cape Peninsula University of Technology. Quantitative analyses illustrate relationships between students' academic literacy levels and the impacts these have on academic performance. Conclusions to the paper attempt a critical assessment of what the analyses tell us about students' levels of academic literacy; what these levels of literacy might mean for students and their teachers; and what the strengths and limitations of assessing academic literacy using a generic test might be.




1 . Introduction 

Internationally and nationally, there is a substantial and arguably growing interest in the importance of assessing applicants seeking study places in Higher Education using multiple rather than single assessment criteria (see, for example, Arce-Ferrer & Castillo, 2006; Clemans, Lunneborg & Raju, 2004; Cliffordson, 2006; Houston, Knox & Rimmer, 2007; Shivpuri, Schmitt, Oswald & Kim, 2006; Stricker, 2004). It would appear that this interest is driven by a number of factors or forces that seem universal in Higher Education contexts: (1) a growing concern internationally that applicants appear increasingly poorly prepared to cope with the generic academic reading, writing and thinking demands placed upon them on entry to Higher Education study; (2) a concern that the results of conventional school-leaving examinations are not necessarily providing interpretable understandings of the academic competence levels of incoming students; (3) international trends towards greater diversity of educational background and experience in student intakes - and a concomitant need for Higher Education to have a common understanding of the differing academic levels of students from these diverse educational backgrounds; (4) a growing need for Higher Education to be responsive to the educational backgrounds of students in a learning and teaching sense, and for an assessment of academic 'needs' to be an important first step towards the placement of students in appropriate curricula according to their educational background.

Research studies of the kind mentioned above underscore an increasing focus on the use of what are variously called admissions tests, entrance tests or selection tests as a means of collecting information about Higher Education applicants that is complementary to conventional or traditional assessment measures such as school-leaving examinations. There is now more than anecdotal evidence that Higher Education institutions and admissions committees or panels are taking seriously the need for responsible, ethical and equitable approaches to admissions decisions, and a parallel need to make use of the multiple sources of information collected about applicants for the placement of those eventually registered into appropriate curricula. Furthermore, there is a clear need to assess the outcomes of the use of multiple selection criteria on the academic progression of students thus selected.

The use of admissions tests such as the Scholastic Aptitude Test (SAT) (ETS, 2007) or the Graduate Management Admission Test (GMAT) (GMAC, 2007), or assessments of language proficiency such as the International English Language Testing System (IELTS) (British Council, 2007) or the Test of English as a Foreign Language (TOEFL) (ETS, 2007), has become common cause in assessing applicants' readiness for Higher Education. These tests are implemented principally because it is believed that they will yield information about applicants' abilities to cope with the typical reading, writing and thinking demands they will likely face in Higher Education, or that they will indicate the extent to which applicants will be able to cope with the language demands placed upon them in a particular medium-of-instruction.

One form of complementary assessment that has become commonly used in the South African Higher Education landscape, to assess students' readiness to cope with typical language and learning demands, is an assessment of academic literacy. The Placement Test in English for Educational Purposes (PTEEP) (AARP, 2007), for example, is a test that is used at the pre-admissions stage to assess applicants' responsiveness to progressively more demanding reading and writing tasks. Based on the levels of responsiveness applicants demonstrate in this assessment - when compared with other applicants from similar educational backgrounds - these applicants are regarded as 'recommendable' or 'not recommendable' for particular forms of Higher Education provision (for example, conventional/mainstream or foundational provision). Scores on the PTEEP are used as complementary measures to other standard or alternate assessments of applicants' readiness for Higher Education. Another example of an academic literacy assessment that is used in South Africa is the Standardised Assessment Test for Access and Placement (SATAP) (SATAP, 2007). This assessment has historically been used after applicants have been selected for Higher Education studies, as a means of assessing the extent to which these students might require assistance with coping with the academic literacy demands they are likely to face. A third and final example of an academic literacy test is the Test of Academic Literacy Levels (TALL) (UAL, 2007), used post-selection by a number of Higher Education institutions to place students scoring below a certain score in courses that teach generic academic literacy in one or two media-of-instruction (Weideman, 2003; Van der Slik & Weideman, 2005).

Before proceeding any further with the arguments in this paper, it seems necessary to explain what is meant by 'academic literacy' in this context. Academic literacy in the sense delineated in this paper (cf. Bachman & Palmer, 1996; Yeld, 2001; Cliff & Yeld, 2006; Weideman, 2006) means the extent to which students are able to:

• make meaning from texts that they are likely to encounter in their studies;

• understand words and discourse signals in their contexts;

• identify and track academic argument;

• understand and evaluate the evidential basis of argument;

• extrapolate and draw inferences and conclusions from what is stated or given;

• identify main from supporting ideas in the overall organisation of a text;

• understand information presented visually (e.g. graphs, tables, flow-charts);

• understand basic numerical concepts and information used in text, including basic numerical manipulations.

In essence, then, academic literacy is necessarily a form of verbal reasoning that is crucially language-dependent. It follows that an assessment of academic literacy becomes an assessment of a student's verbal reasoning capacity in a particular language and this, in turn, implies that academic literacy is associated with Higher Education medium-of-instruction. The assessment of academic literacy is, therefore, not an assessment of language per se, but an assessment of the use of language as a vehicle for making meaning, making argument and understanding underlying point.

The notion of academic literacy is conceived of from a different but related angle in student learning research. There, it is taken to refer to students' abilities to process information they read in a 'deep' sense, i.e. to understand the underlying point, structure or argument of what is read (Marton & Säljö, 1976a & b) rather than to view what is read as consisting of isolated or discrete 'bits' of information. Theoretical frameworks drawn from research into how students learn in Higher Education have drawn a distinction between what are referred to as the 'deep' and 'surface' approaches to learning (cf. Marton & Säljö, 1984). The former depicts learning - and, by extension, academic literacy - as a process of seeing how knowledge contributes to understanding, how understanding contributes towards 'seeing' the world differently, and how 'seeing' differently leads to the development of one's own point of view that is based on new understandings and insights. 'Surface' learning, on the other hand, is described as viewing knowledge as having a discrete, factual character, disconnected and disembodied from other knowledge and to be reproduced in an untransformed form, usually to fulfil narrow assessment requirements. It follows - from earlier discussion of the meaning of academic literacy - that 'deep' approaches to learning appear close to the reading, writing and thinking approaches described earlier, when the notion of academic literacy was explicated.

Later formulations drawn from student learning research in Higher Education argued that 'deep' and 'surface' approaches to learning are influenced by the context in which students learn and by students' perceptions of that context; by how students conceive of what learning is; and by students' underlying forms of motivation for learning. These, in turn, influence how students approach typical reading, writing and thinking tasks in Higher Education. The terms 'meaning' and 'reproducing' orchestration (Meyer, 1991) were formulated to describe qualitative distinctions in the ways in which individual students 'orchestrated' their learning in accordance with their conceptions of learning, their perceptions of their learning contexts and their complex or less complex understandings of the purposes of Higher Education study.

Whether one approaches notions of academic literacy as being about coping with typical academic reading and writing demands, or whether one views academic literacy as being related to 'deep' approaches or 'meaning' orchestrations towards learning, the commonality in the two approaches appears to lie in language. Students use language to read texts in Higher Education; they use language to express their understandings of these texts or to produce viewpoints of their own; and they are exposed to or use language to engage with their own understandings of what learning is. Assessments of academic literacy, then, are assessments of responses to and production of language, albeit language in a specialised, applied form: the language of reasoning, argument, exposition, explanation.

As was argued earlier, a common assumption about assessing students' levels of academic literacy - whether prior to or on entry to Higher Education - is that important understandings about how these students will cope will be gained from this assessment process. A second assumption appears to be that assessments of generic forms of academic literacy are useful for understanding how students will cope in discipline-specific contexts. A third - perhaps less common - assumption is that assessments of academic literacy will lead to specific learning and teaching interventions to support those students who are deemed not yet ready to cope with the generic academic literacy demands placed upon them in Higher Education.

This takes us back to the title of this paper: does the assessment of students' academic literacy matter? In this context: (1) are results on tests of academic literacy associated with subsequent academic performance of students across a range of disciplines and, if so, how; and (2) do results on tests of academic literacy provide useful information in suggesting teaching and learning interventions necessary to improve students' levels of academic literacy, and with what consequence for student achievement? There is ready intuitive and intellectual understanding amongst Higher Education academics that the characteristics of academic literacy as defined earlier in this paper are important if students are going to become academically literate in the generic sense, but there is arguably less agreement about how important this generic academic literacy is in discipline-specific contexts.

The studies described and analysed later in this paper represent an attempt to respond to questions raised in the previous paragraph by being focused on (1) the predictive validity of a generic test of academic literacy; and (2) the consequences for teaching and learning of data collected from this generic test. The studies are based on the use of the PTEEP (referred to earlier), and it is to an explication of the construct and psychometric properties of the PTEEP that this paper now turns.


2. An academic literacy test: the Placement Test in English for Educational 
Purposes (PTEEP) 

The construct or 'blueprint' for the PTEEP is fully explicated in Yeld (2001) and a detailed explanation is not attempted here. The principal features of the approach to the development of the PTEEP are that it is: (1) a generic test, designed to provide complementary information to traditional achievement tests (such as the school-leaving examination); (2) developed by national interdisciplinary teams of expertise, to increase both its face and content validity; (3) relatively curriculum-independent, so as to downplay the role of prior exposure to knowledge; (4) designed to assess language as a vehicle for academic study and reasoning rather than language per se; (5) developed according to a theme and a set of specifications, so as to ensure that engagement for the writers can be 'scaffolded', made progressively more complex, and be authentic to a Higher Education context (adapted from Cliff, Hanslo, Ramaboa & Visser, 2005).

Table 1 shows the PTEEP construct operationalised in the form of a set of specifications that depict the reasoning approaches assessed in the test (Cliff, Yeld & Hanslo, 2003 - adapted from Yeld, 2001; Bachman & Palmer, 1996).

As can be seen from Table 1, the construct of the PTEEP is conceptually constituted of nine sub-constructs that cover reasoning and meaning-making at a word, sentence, paragraph and argument level. An important feature of the PTEEP is its additional focus on visual and numerical literacy: these sub-constructs are included in the PTEEP because they contain special forms of language that are central components of most, if not all, academic programmes of instruction.

The 2007 PTEEP has an overall Cronbach alpha reliability of 0.89 (typically, overall reliabilities for the test are between 0.85 and 0.92) - if the edit-type question is removed from the analysis, the alpha rises to 0.92. The Cronbach alpha is based on a sample of n = 2456 writers.
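The Cronbach alpha statistic reported above is a standard internal-consistency coefficient computed from the item variances and the variance of respondents' total scores. A minimal Python sketch of the standard formula follows (purely illustrative; the function name and toy data are not drawn from the PTEEP analysis):

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix (one row per respondent, one
    column per item): alpha = k/(k-1) * (1 - sum(item vars) / var(totals))."""
    k = len(scores[0])                                    # number of items
    items = list(zip(*scores))                            # transpose: scores per item
    item_var_sum = sum(variance(item) for item in items)  # sum of item variances
    total_var = variance([sum(row) for row in scores])    # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Perfectly consistent items yield an alpha of 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Applied to an items-by-respondents matrix of PTEEP scores, a calculation of this form would yield the overall reliability figures quoted in the text.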

Table 2 shows the coefficients of correlation amongst the sub-constructs of the 2007 PTEEP.


Ensovoort: jaargang 11, nommer 2, 2007




Table 1: PTEEP specifications

Vocabulary: Students' abilities to derive/work out word meanings from their
context.

Metaphorical expression: Students' abilities to understand and work with
metaphor in language. This includes their capacity to perceive language
connotation, word play, ambiguity, idiomatic expressions, and so on.

Extrapolation, application and inferencing: Students' capacities to draw
conclusions and apply insights, either on the basis of what is stated in texts
or is implied by these texts.

Understanding the communicative function of sentences: Students' abilities to
'see' how parts of sentences/discourse define other parts; or are examples of
ideas; or are supports for arguments; or attempts to persuade.

Understanding relations between parts of text: Students' capacities to 'see'
the structure and organisation of discourse and argument, by paying attention
- within and between paragraphs in text - to transitions in argument;
superordinate and subordinate ideas; introductions and conclusions; logical
development.

Understanding text genre: Students' abilities to perceive 'audience' in text
and purpose in writing, including an ability to understand text register
(formality/informality) and tone (didactic/informative/persuasive/etc.).

Separating the essential from the non-essential: Students' capacities to 'see'
main ideas and supporting detail; statements and examples; facts and opinions;
propositions and their arguments; being able to classify, categorise and
'label'.

Understanding information presented visually: Students' abilities to
understand graphs, tables, diagrams, pictures, maps, flow-charts.

Understanding basic numerical concepts: Students' abilities to make numerical
estimations; comparisons; calculate percentages and fractions; make
chronological references and sequence events/processes; do basic computations.


The mostly moderate correlations between the PTEEP sub-constructs suggest
that there is some empirical support for the conceptual sub-constructs as
defined in Table 1. The sub-constructs for the most part seem to be assessing
aspects of academic literacy that are at least partly discrete from one another,
which seems to justify the separation of the construct into its sub-constructs.
Given the large sample size from which these data were drawn (n = 2456) and
the diversity of the writer pool in terms of demographic factors (such as
school and linguistic background), the correlations in Table 2 are arguably
between the sub-constructs of the test rather than related to the homogeneity
of the writer pool.
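Correlation matrices like Table 2 are derived from writers' scores on each sub-construct section. A minimal sketch, using invented section scores for three hypothetical sub-constructs (not PTEEP data):

```python
import numpy as np

# Invented section scores for 5 writers on 3 hypothetical sub-constructs
vocab = np.array([12, 8, 15, 6, 10])
infer = np.array([10, 7, 14, 5, 11])
relat = np.array([6, 5, 9, 4, 7])

# Pearson correlation matrix among the sub-construct scores:
# rows/columns follow the order vocab, infer, relat
corr = np.corrcoef(np.vstack([vocab, infer, relat]))
```

Each off-diagonal entry of `corr` corresponds to one cell of a table such as Table 2; the diagonal is 1 by construction.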

Two exceptions are apparent from Table 2: (1) the correlation of 0.97 be- 
tween the 'vocabulary' and 'discourse' sub-constructs suggests that writer 
performance in one is strongly associated with writer performance in the oth- 
er. This seems theoretically surprising, but can be explained by the fact that 
the questions assessing discourse indicators in the 2007 PTEEP in many cases 




Table 2: Correlations amongst PTEEP sub-constructs

            Vocab  Metaph  Infer  Relat  Senten  Disc   Genre  Essent  Visual  Numeric
Vocabulary    -     0.56   0.57   0.38   0.44    0.97   0.60   0.74    0.73    0.73
Metaphor     0.56    -     0.46   0.30   0.34    0.56   0.47   0.55    0.49    0.49
Inference    0.57   0.46    -     0.32   0.34    0.57   0.45   0.55    0.52    0.52
Relations    0.38   0.30   0.32    -     0.22    0.38   0.33   0.39    0.35    0.35
Sentences    0.44   0.34   0.34   0.22    -      0.44   0.35   0.45    0.40    0.40
Discourse    0.97   0.56   0.57   0.38   0.44     -     0.61   0.74    0.74    0.74
Genre        0.60   0.47   0.45   0.33   0.35    0.61    -     0.58    0.54    0.54
Essential    0.74   0.55   0.55   0.39   0.45    0.74   0.58    -      0.67    0.67
Visual       0.73   0.49   0.52   0.35   0.40    0.74   0.54   0.67     -      1.00
Numerical    0.73   0.49   0.52   0.35   0.40    0.74   0.54   0.67    1.00     -

Note: p < 0.05 in all cases.
The sub-constructs are defined in Table 1. In Table 2, however, the 'relations'
sub-construct has been separated into two: 'relations' and 'discourse'.


asked writers to assess the meanings of words from academic word lists, for 
example, 'however'; 'nevertheless'; 'because'; and so on. The correlation of 
1.00 between the 'visual' and the 'numerical' sub-construct is not surprising, 
since writer performance on these two constructs was assessed by the same 
set of questions. 

Typically, the PTEEP consists of between 65 and 70 items/questions, divided
into the following question-types: multiple-choice questions; short-response
questions; a flow-chart/concept map question; an edit-type question; and a
one-page expository essay question. There are at least three texts for reading
in the PTEEP, all of which are related to the theme for that particular test.

Table 3 depicts correlations amongst a number of the question-types in the
2007 PTEEP, as well as the correlations between these question-types and the
total score of writers on the test.


Table 3: Correlations amongst question-types on the PTEEP

               Total   Short pieces   Edit question   Essay   Multi-choice
Total            -         0.95           0.88         0.84       0.92
Short pieces   0.95         -             0.93         0.69       0.83
Edit question  0.88        0.93            -           0.64       0.77
Essay          0.84        0.69           0.64          -         0.66
Multi-choice   0.92        0.83           0.77         0.66        -

Note: p < 0.05 in all cases.




The high correlations between various question types and the total score of 
writers on the test suggest that assessment using any one question type will 
suffice for determining the overall performance of writers. In particular, the 
multiple-choice questions on their own, or the short response pieces on their 
own, are very strongly correlated with the total score. The correlation of the 
short pieces to the total score is somewhat surprising, given that these pieces 
are assessed by different markers, but it is also encouraging evidence of stand- 
ardisation amongst these markers for this question-type. The more moderate 
correlations amongst different question-types on the test suggest that, although 
any one question-type might be useful for predicting overall writer perform- 
ance on the test, each of the question-types does yield somewhat discrete in- 
formation about writer performance - or that marker standardisation, whilst 
reasonable, has not yet reached completely desirable levels. 

In summarising this section of the paper - and to return to the topic of 
whether an assessment of academic literacy as measured by the PTEEP mat- 
ters - it would seem that there is justification for the division of the PTEEP 
construct into its sub-constructs, but it would also seem that there is some 
degree of overlap amongst the sub-constructs. This is not surprising, given 
that academic literacy would seem to be a complex construct the sub-con- 
structs of which cannot wholly be separated into constituent parts. 


3. Associations between PTEEP scores and academic performance 

In one very tangible sense, assessment of academic literacy might matter: if 
academic literacy can be shown to have associations with subsequent aca- 
demic performance in Higher Education. This section of the paper will 
deal with two approaches to explorations into associations between 
PTEEP and academic performance 1 . The first approach is a high-level 
(trend) exploration of the extent to which scores on the PTEEP have 
association with academic performance in two contrasting disciplinary 
contexts, viz. Engineering and Humanities. The second approach at a 
programme-specific level assesses the relations between PTEEP and a 
postgraduate Engineering studies context, and the value of the PTEEP 
and its construct for teaching and learning purposes. 

Figure 1 shows the associations between PTEEP scores (expressed as a rank- 
ing of students from decile 1 - top decile - to decile 10 - bottom decile) and
mean academic performance for the 2002 cohort of University of Cape Town 
Engineering students at the end of their first academic year of study. For eas- 
ier reporting, decile rankings have been grouped in pairs, and for examining 
trend-level associations, mean academic performance score has been computed 
as a simple average of academic performance over the courses taken by these 


1 Not all data for these explorations are included in this paper, for reasons
of brevity. Full analyses are available for scrutiny from the first author.





Figure 1: Associations between PTEEP scores and academic performance - 
2002 Engineering students in their first year of studies 


students. Note that the 2002 cohort of students has been further sub-divided
into two groups: those students who were registered for 'mainstream' (con-
ventional, standard curriculum) programmes and those registered for foun-
dation (reduced or extended curriculum) programmes.

From Figure 1, it can be seen that for mainstream students, PTEEP per-
formance is associated with noticeable 'spreads' of scores in academic per-
formance terms at the end of first-year. The trend, though, for mainstream
students is that higher decile ranking on PTEEP (particularly deciles 1 and 2)
is associated with higher mean academic performance and lower numbers of
students scoring below a 50% mean. Assessing academic literacy by means of
the PTEEP does appear to matter in academic performance terms for
mainstream students at the end of first-year.
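The decile ranking and pairing described above can be sketched in code. The student records below are invented, but the grouping logic (decile 1 = top, deciles grouped in pairs, mean academic performance per pair) follows the description in the text:

```python
import pandas as pd

# Invented records: PTEEP score and first-year mean mark per student
df = pd.DataFrame({
    "pteep":     [78, 65, 59, 52, 48, 44, 40, 37, 33, 25],
    "mean_mark": [72, 68, 61, 58, 55, 53, 50, 47, 45, 40],
})

# qcut labels bins from lowest to highest, so reverse the labels
# to make decile 1 the top decile and decile 10 the bottom
df["decile"] = pd.qcut(df["pteep"], 10, labels=list(range(10, 0, -1))).astype(int)

# Group decile rankings in pairs (1-2 -> pair 1, ..., 9-10 -> pair 5)
df["decile_pair"] = (df["decile"] + 1) // 2

# Mean academic performance per decile pair, as plotted in Figure 1
summary = df.groupby("decile_pair")["mean_mark"].mean()
```

With real cohort data one would also count, per pair, the students whose mean falls below 50%, which is the second trend discussed above.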

For Foundation programme students, higher PTEEP scores are not as clearly 
related to higher academic performance scores as they are for mainstream 
students. There is still a tendency, though, for higher PTEEP scores to be asso- 
ciated with lower numbers of students scoring below a 50% mean for aca-
demic performance. Assessing the academic literacy of Foundation programme 
students using the PTEEP does appear to matter in terms of lower PTEEP 
scores predicting the numbers of students falling below 50% mean, but mat- 






Figure 2: Associations between PTEEP scores and academic performance - 
2002 Engineering students in their second year of studies 


ters less in terms of higher PTEEP scores relating to higher academic perform- 
ance than it does for mainstream students. 

Figure 2 shows associations between PTEEP scores and academic perform- 
ance for the 2002 intake of Engineering students in their second year of stud- 
ies. Essentially, the patterns of association are similar for mainstream and foun- 
dation programme students as they were for first-year performance. 

In a contrasting disciplinary context, i.e. Humanities, associations between 
PTEEP scores and mean academic performance produce patterns of the kind 
illustrated in Table 4. The Table shows associations between bands of PTEEP 
performance and mean academic performance at the end of first-year for two 
cohorts of Humanities students, viz. the 2004 and 2005 intakes. 'Bands' of per- 
formance refers to the grouping of PTEEP performance by deciles as indicat- 
ed in Table 4. 

From Table 4, it is clear that approximately 70% of Deciles 1-3 students in 
both years achieved mean second or first class pass scores and between ap- 
proximately 65% and 70% of the Deciles 8-10 students scored third class pass- 
es or failed. In Humanities, higher ranked PTEEP performance seems associ- 
ated with a higher level of pass; lower ranked PTEEP performance associated 




Table 4: Associations between PTEEP scores and academic performance -
2004 and 2005 Humanities students in their first year of studies

Year intake (numbers of students in each category)

                         2004                           2005
                   Deciles  Deciles  Deciles    Deciles  Deciles  Deciles
                     1-3      4-7     8-10        1-3      4-7     8-10
Fail                  13        9        7         19       18        9
Third class pass      64       86       41         81       90       62
Second class pass    197       92       18        198      104       35
First class pass       7        2        0         19        2        1
Total                281      189       66        317      214      107

Year intake (percentage of students in each category)

                         2004                           2005
                   Deciles  Deciles  Deciles    Deciles  Deciles  Deciles
                     1-3      4-7     8-10        1-3      4-7     8-10
Fail                4.63%    4.76%   10.61%      5.99%    8.41%    8.41%
Third class pass   22.78%   45.50%   62.12%     25.55%   42.06%   57.94%
Second class pass  70.11%   48.68%   27.27%     62.46%   48.60%   32.71%
First class pass    2.49%    1.06%    0.00%      5.99%    0.93%    0.93%
Total                100%     100%     100%       100%     100%     100%


with a lower level of pass. Furthermore, the mean academic performance lev- 
els of the Deciles 1-3 students are statistically significantly higher than the 
mean academic performance levels of both of the other two groups. This sug- 
gests that in an environment of competition for academic places in this Facul- 
ty, the Deciles 1-3 students would be more likely to be academically successful 
(albeit that data is limited here to the first year of study). The two studies 
above represent investigations conducted using a trend level approach to as- 
sessing associations between PTEEP and subsequent academic performance. 
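The percentage panel of Table 4 is a column-wise normalisation of the count panel. Using the 2004 counts from Table 4, it can be reproduced as follows:

```python
import pandas as pd

# Counts for the 2004 intake from Table 4 (fail / third / second / first class)
counts = pd.DataFrame(
    {
        "Deciles 1-3": [13, 64, 197, 7],
        "Deciles 4-7": [9, 86, 92, 2],
        "Deciles 8-10": [7, 41, 18, 0],
    },
    index=["fail", "third class", "second class", "first class"],
)

# Column percentages: each cell divided by its column total
percentages = counts / counts.sum() * 100
```

For example, 13 of 281 Deciles 1-3 students failed, which is the 4.63% reported in the lower half of Table 4.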

The third study described below represents an attempt to explore the im- 
pact of teaching and learning on student performance on the PTEEP. The con-
text for the study was a postgraduate Engineering course in Project Manage- 
ment, where students wrote the PTEEP at the commencement of their studies 
and again at the conclusion of their study programme. The principal aim of 
this process was to assess the extent to which students' academic literacy as- 
sessed by the PTEEP could be said to have altered or remained stable after a 
programme of study, i.e. did 'good' or 'poor' performance on the PTEEP re- 
main stable or improve at the second administration of the test? The second 
aim of this study was to explore the extent to which the PTEEP could be used 
to identify academic literacy strengths and weaknesses in a group of students, 
i.e. could performance on the test be used to guide teaching and learning? 




Table 5: Comparison of mean PTEEP performance of postgraduate
Engineering students on two separate occasions

                                                  Mean PTEEP score   Mean PTEEP score
                                                  (first occasion)   (second occasion)
Full cohort                                             45.5%             45.7%
Sub-cohort who scored below 30% on the
first occasion                                          23.8%             27%
Sub-cohort who scored between 31% and 50%
on the first occasion                                   40.7%             40.4%
Sub-cohort who scored above 50% on the
first occasion                                          56%               55%

Table 5 shows the differences in mean PTEEP performance for the 2005 
cohort of postgraduate Engineering students who wrote the test on two occa- 
sions in the academic year. 

As will be noted from Table 5, mean PTEEP performance remained rela- 
tively stable from one test administration to the next - differences in mean 
PTEEP performance were not statistically significant. The only sub-group for 
whom differences (improvements) in performance could be seen were the
group whose PTEEP performance had been weakest on the first administra-
tion occasion. Stable mean PTEEP performance for the sub-cohort who scored
above 50% on the first occasion is arguably acceptable: these students per- 
formed creditably on the first occasion, and retained that level of perform- 
ance. Stability or minor improvement in the other two sub-cohorts is some- 
what worrying. The weakest sub-cohort did show some improvement in mean 
PTEEP performance (to 27%), but from a poor initial performance base. 
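The paper reports that pre/post differences of the kind shown in Table 5 were not statistically significant; a paired comparison of the two administrations is the natural test here, although the paper does not name the procedure it used. A minimal paired t-statistic, computed from invented pre/post scores:

```python
import math

# Invented first- and second-occasion PTEEP percentages for 8 students
first = [24, 41, 56, 30, 45, 52, 38, 60]
second = [27, 40, 55, 34, 46, 51, 39, 59]

# Paired t statistic: mean of the per-student differences
# divided by the standard error of that mean
diffs = [b - a for a, b in zip(first, second)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)
```

A |t| value well below the critical value for n - 1 degrees of freedom would correspond to the 'not statistically significant' finding reported for Table 5.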

There may be a number of possible explanations for the lack of improve-
ment in PTEEP scores for the weaker sub-cohorts: (1) for students weak in 
academic literacy, one year is not sufficient to improve this academic literacy 
in a teaching context that is not explicitly designed to address academic liter- 
acy as defined in this paper; (2) student motivation to demonstrate improve- 
ment in a generic academic literacy test is low if these students can see no 
apparent relationship between what is assessed in this test and what is as- 
sessed in a discipline-specific context such as this postgraduate Engineering 
one. The most compelling explanation for the lack of improvement lies in the 
absence of explicit intervention of the academic literacy kind assessed by the 
PTEEP in the programme of teaching and learning these students were regis- 
tered for. Conventional coursework per se proved insufficient to change their 
scores on an academic literacy test. 

Particular approaches to academic reading, writing and thinking that ap- 
peared to be weakest for the group of students as a whole (data available from 




these authors) were: (1) metaphorical expression - students' capacity to un- 
derstand and use analogous, "pictorial" and non-literal language and reason- 
ing; (2) text genre - students' capacity to understand that writers have differ- 
ent "audiences" and purposes for writing, and that these influence what and 
how they write; (3) own voice - students' capacity to produce their own log- 
ical argument, structure this argument and use appropriate language in its 
formulation. However, these weaknesses in an academic literacy sense were 
not explicitly addressed in the teaching programme. Course lecturers did not
engage with the discipline-specific meanings and consequences of, for exam- 
ple, students' test weaknesses in analogous reasoning, text genre, or capacity 
to produce structured argument. Nor was course assessment in the postgrad- 
uate Engineering context explicitly related to the assessment embodied in the 
academic literacy test. So it may be that assessing students' academic literacy 
for learning improvement does not necessarily 'matter' - unless this assess- 
ment is tied to direct teaching interventions aimed at addressing weaknesses 
identified. 


4. Concluding discussion 

We return to the title of our paper and consider again whether an assessment 
of the academic literacy of entry-level Higher Education students matters. We 
have explored the notion of what is meant by 'matters' at a number of levels
in this paper. Firstly, we have considered the extent to which an assessment of
generic academic literacy, such as the PTEEP, is regarded by Higher Education
academics as having validity, i.e. we have considered the face validity of the 
PTEEP, and have observed that the theoretical grounding of the construct of 
the PTEEP in international studies of language assessment and of student learn- 
ing helps to establish this validity. We have also noted that the participation 
by interdisciplinary national teams in the development and operationalisa- 
tion of the construct of the PTEEP further assists in establishing both face and, 
in so far as this is systematically considered and articulated, also content va- 
lidity. At an empirical level, we have reported on the reliability of the PTEEP 
and the coherence of the construct and its sub-constructs. We have argued 
that there appears to be some empirical support for the division of the con- 
struct into its constituent parts, but that there also appears to be some degree 
of overlap amongst the constituents. We have also presented evidence that 
some question-types on the test might of themselves be sufficient to assess 
students' academic literacy, but that there are grounds for arguing that read- 
ing-response type questions (multi-choice questions) assess different kinds of 
academic literacy to writing-response type questions (productive elements in 
the PTEEP). 




Secondly, we have assessed the extent to which assessments such as the 
PTEEP 'matter' in terms of their having associations with subsequent student 
academic performance. Large-scale studies of the kind described in Engineer- 
ing and Humanities contexts in this paper suggest that differing levels of per- 
formance on the PTEEP are associated with differing levels of academic per- 
formance across both mainstream and foundation programme provision. In 
the mainstream context, higher scores on the PTEEP appear to be associated 
with higher academic performance scores and lower scores on the PTEEP with lower
academic performance. In the foundation programme context, lower scores 
on the PTEEP appear to have some association with lower scores academical- 
ly. Higher scores on the PTEEP are less associated with higher academic per- 
formance scores than they were for mainstream students, but are more likely 
to be predictive of success than failure for foundation programme students. 

Smaller-scale studies of the kind reported on in the postgraduate Engi- 
neering context, where explorations of a direct relationship between PTEEP 
and academic performance were attempted, provide no significant evidence 
that PTEEP scores improve after a period of academic study. At face-value, 
however, there would seem to be evidence of improvement in PTEEP per- 
formance for those students who performed poorly on the PTEEP at the first 
time of writing. We conclude that PTEEP performance may not 'matter' un- 
less it is explicitly addressed in the context of discipline-specific curricula and 
unless the academic literacy assessed in the PTEEP is integrated into the teach- 
ing, learning and assessment of the disciplinary programme. 


Bibliography 

AARP 2007. The Placement Test in English for Educational Purposes: The Tea Test. Alternative
Admissions Research Project, University of Cape Town. Available at http://www.aarp.
ac.za. Accessed on 20 November 2007.

Arce-Ferrer, A.J. & Castillo, I.B. 2006. Investigating postgraduate college admission inter-
views: Generalisability theory, reliability and incremental predictive validity. Journal of
Hispanic higher education 6 (2): 118-134.

Bachman, L.F. & Palmer, A.S. 1996. Language testing in practice. Hong Kong: Oxford University
Press.

British Council. 2007. The International English Language Testing System (IELTS). United
Kingdom: The British Council.

Clemans, W.V., Lunneborg, C.E. & Raju, N.S. 2004. Professor Paul Horst's legacy: A differen-
tial prediction model for effective guidance in course selection. Educational measurement:
Issues and practice 23 (3): 23-30.

Cliff, A.F., Yeld, N. & Hanslo, M. 2003. Assessing the academic literacy skills of entry-level
students, using the Placement Test in English for Educational Purposes (PTEEP). Bi-
annual conference of the European Association for Research in Learning and Instruction
(EARLI), Padova, Italy.

Cliff, A., Hanslo, M., Ramaboa, K. & Visser, A. 2005. Third annual report to the Health
Sciences Consortium on the use of Health Sciences Placement Tests. AARP Research
Report, University of Cape Town.




Cliff, A.F. & Yeld, N. 2006. Test domains and constructs: academic literacy. In: Griesel, H. (ed.)
Access and entry level benchmarks: The national benchmark tests project. Pretoria: Higher
Education South Africa: 19-27.

Cliffordson, C. 2006. Selection effects on applications and admissions to Medical Education
with regular and step-wise admission procedures. Scandinavian journal of educational
research 50 (4): 463-482.

ETS. 2007. The Scholastic Aptitude Test (SAT). Educational Testing Service. Princeton: USA.

ETS. 2007. Test of English as a Foreign Language (TOEFL). Educational Testing Service.
Princeton: USA.

GMAC. 2007. The Graduate Management Admission Test (GMAT). Graduate Management
Admission Council. Virginia: USA.

Houston, M., Knox, H. & Rimmer, R. 2007. Wider access and progression among full-time
students. Higher education 53: 107-146.

Marton, F. & Säljö, R. 1976a. On qualitative differences in learning: I - Outcome and process.
British journal of educational psychology 46: 4-11.

Marton, F. & Säljö, R. 1976b. On qualitative differences in learning: II - Outcome as a function
of the learner's conception of the task. British journal of educational psychology 46: 115-127.

Marton, F. & Säljö, R. 1984. Approaches to learning. In: Marton, F., Hounsell, D. & Entwistle,
N.J. (eds.) The experience of learning. Edinburgh: Scottish Academic Press: 36-55.

Meyer, J.H.F. 1991. Study orchestration: the manifestation, interpretation and consequences
of contextualised approaches to studying. Higher education 22: 297-316.

SATAP 2007. Standardised Assessment Test for Access and Placement: Language. SATAP
Development Group.

Shivpuri, S., Schmitt, N., Oswald, F.L. & Kim, B.H. 2006. Individual differences in academic
growth: Do they exist, and can we predict them? Journal of college student development 47
(1): 69-86.

Stricker, L.J. 2004. The performance of native speakers of English and ESL speakers on the
computer-based TOEFL and GRE general test. Language testing 21 (2): 146-173.

UAL 2007. The Test of Academic Literacy Levels. Unit for Academic Literacy, University of
Pretoria. Available at http://web.up.ac.za/default.asp?ipkCategoryID=2388&subid=2388
&ipklookid=9 and http://web.up.ac.za/UserFiles/Sample%20Test%20TALL.pdf. Accessed
17 September 2007.

Van der Slik, F. & Weideman, A. 2005. The refinement of a test of academic literacy. Per
linguam 21 (1): 23-35.

Weideman, A. 2003. Assessing and developing academic literacy. Per linguam 19 (1 & 2): 55-65.

Weideman, A. 2006. Transparency and accountability in applied linguistics. Southern African
linguistics and applied language studies 24 (1): 71-86.

Yeld, N. 2001. Equity, assessment and language of learning: key issues for higher education
selection and access in South Africa. Unpublished PhD thesis, University of Cape Town.


Functional multilingualism at the North-West University:
Communication difficulties in meetings

A.S. Coetzee-Van Rooy 

North-West University (Institutional Office) 



The new North-West University (NWU) came into existence on 1 January
2004. The university has campuses in Mafikeng, Potchefstroom and in the Vaal 
Triangle (in Vanderbijlpark), and the institutional head office is in Potchef- 
stroom. The language contexts of the campuses differ substantially, and to 
accommodate this reality, the institution committed itself to functional multi- 
lingualism very early in the merging process. In 2007, Council approved a 
functional multilingual language policy and a language plan that stipulates 
how functional multilingualism is implemented in the domains of teaching and 
learning, research, the organized student life and administration. 

In this article I would like to focus on the language challenges faced by the
administrators at the NWU. The theoretical framework of the study is that of
the ethnography of communication (Saville-Troike, 2003). Research questions of
interest in this article are: (a) What are the experiences, perceptions and
attitudes of administrative workers at the NWU towards the implementation of
the functional multilingual language policy and plan? (b) What could be
learned from these experiences, perceptions and attitudes to improve the
implementation of the functional multilingual language policy and plan at the
NWU? The attempt of the institution at the implementation of functional
multilingualism is unique in the higher education context in South Africa and
it is argued that this courageous effort of the institution should be studied in a
qualitative and longitudinal manner because it could provide insights into the
way forward in the post-1994 South Africa where there is a struggle in different
domains to come to terms with its multilingual context.

The main findings of the article are that the complexity of multilingualism and
the identities of multilingual language users should be considered carefully in
the implementation process of a functional multilingual language policy and
plan. The realities of the multilingual identities of language users challenge
seemingly "obvious" solutions (like the use of interpretation services at meet-
ings) often used in multilingual contexts. A second finding is that the use of an
ethnographic approach seems productive to unearth attitudes and perceptions of
administrators struggling to work effectively in the context of this multilingual
workplace, which would not have been learned if more "quantitative" research
methods were used exclusively.




1. Contextualisation and problem statement

The NWU is striving to manage the multilingual nature of its staff and stu- 
dents by establishing a language policy premised on functional multilingual- 
ism for the domains of teaching and learning, research, the organised student 
life and administration. The language policy statement related to the imple- 
mentation of functional multilingualism in the administrative context that was 
accepted by Council in March 2007 reads: "The implementation of functional 
multilingualism for working and administrative purposes happens in a sys- 
tematic and goal-oriented way. By means of a consultative process, strategies 
are lobbied and structures put in place in an ongoing way so as to implement 
functional multilingualism at the NWU workplace as optimally as possible, 
with due recognition of the language rights of stakeholders". The definition 
of functional multilingualism provided in the policy is: 

"(i) Functional multilingualism means that the choice of a particular lan-
guage in a particular situation is determined by the context in which it 
is used, and that variables such as the purpose of the communication 
and levels of language proficiency of the interlocutors play a determin- 
ing role in the choice of a particular language code or language codes. 
The implication is that not all official languages need to be used for 
communicative purposes at the NWU but that sensitivity should be 
shown to the main regional languages used in provinces where cam- 
puses of the institution are situated. 

(ii) Multilingual refers to the use of more than one language, as well as the 
ability to use more than one language." 

The approved language policy and plan of the NWU can be accessed on the 
institutional website at http://www.nwu.ac.za/language/pdf/taalbeleid_e.pdf.
The quotations from the policy used in this article were accessed from the 
website on 25 October 2007. 

This article does not focus on the challenges that the NWU faces to estab- 
lish functional multilingualism in the educational and research domains (lan- 
guage planning domains 1 and 2) or in the domain of student life (language 
planning domain 3) or improvement of language skills (language planning 
domain 4). The institution is already conducting groundbreaking work in the 
area of language planning domain 1, teaching and learning, with the imple- 
mentation of simultaneous interpretation in selected academic programmes. 
Van Rooy (2005) already reported research findings of a project that investi- 
gated the effectiveness of the simultaneous interpretation during contact ses- 
sions at the NWU. 

The focus of this study is on describing the challenges faced by administra- 
tors in the institutional and campus offices of the NWU when they attempt to 




implement functional multilingualism in the language planning domains for 
horizontal and vertical administrative communication. This is described in 
language planning domain 5 in the approved policy and plan. The main aim 
of the article is to gather data in a longitudinal manner about the experiences, 
perceptions and attitudes of NWU staff in the process of establishing and im- 
plementing a functional multilingual language policy and plan with regard to 
administrative communication. The preliminary results reported in this arti- 
cle relate to verbal communication during institutional meetings at the NWU. 
A better understanding of the experiences, perceptions and attitudes of staff 
during this process could provide useful insight for language planners in gen- 
eral and the language planners at the NWU in particular. A longitudinal study
of the unique attempt by the NWU to establish and implement a functional 
multilingual language policy in the higher education context could provide 
valuable data on this part of the transformation agenda for higher education 
institutions in South Africa. 


2. Theoretical framework 

The research conducted in this study is done within the theoretical frame- 
work of the ethnography of communication (Saville-Troike, 2003). The eth- 
nography of communication is aimed at a study of "the structuring of com- 
municative behaviour and its role in the conduct of social life" (Saville-Troike, 
2003: 1). The ethnography of communication has two focal points: "the de- 
scription and understanding of communicative behaviour in specific cultural 
settings" and the "formulation of concepts and theories upon which to build a 
global metatheory of human communication" (Saville-Troike, 2003: 1-2).

The scope and focus of the ethnography of communication are demarcated
by the answer to the following question: what does a language user need to 
know to communicate appropriately within a specific speech community? This 
knowledge is defined as "communicative competence" (Saville-Troike, 2003: 2). 

"The focus of the ethnography of communication is the speech communi- 
ty, the way communication within it is patterned and organized as systems of 
communicative events, and the ways in which these interact with all other 
systems of culture" (Saville-Troike, 2003: 2). "The communicative event is the 
basic unit for descriptive purposes. A single event is defined by a set of com- 
ponents throughout, beginning and involving the same general purposes of 
communication, the same general topic, and involving the same participants, 
generally using the same language variety, maintaining the same tone or key 
and the same rules for interaction in the same setting" (Saville-Troike, 2003: 
23). The speech community in this article is the staff at the NWU and the 
communicative events of interest are institutional meetings. 


3. Methodology 

In this study, it is accepted that, "Ethnography involves an ongoing attempt 
to place specific encounters, events, and understandings into a fuller, more 
meaningful context ... ethnography is the continuation of fieldwork rather 
than a transparent record of past experiences in the field" (Tedlock, 2003: 165). 
A key assumption in any ethnographic study is that prolonged interaction 
with people will lead to a better understanding of their beliefs, motivations,
and behaviour than any other approach would. In this study, it is assumed
that prolonged interaction with administrative staff at the NWU will lead to a 
better understanding of their attitudes, beliefs, motivations and behaviour in 
terms of their participation in administrative communication in an institution 
where functional multilingualism is adopted. 

The main fieldwork methods utilised by ethnographers are also employed 
in this longitudinal study: observations, interviews and an analysis of rele- 
vant texts (Campbell & Gregor, 2002: 71-81; Miller et al., 2003: 219). In order to
structure observation field notes effectively, a grid is used to capture field 
notes during observation (see Appendixes A, C and E as examples). These 
grids each attempt to assist the observer to capture both "holistic" and "micro- 
scopic" data of the relevant communicative events (Miller et al., 2003: 224). No
attempts are made to capture full communicative events. In line with the fo-
cus of this study, communicative events relating to the experiences, percep- 
tions and attitudes of administrative staff at the NWU who are establishing 
and implementing a functional multilingual language policy and plan are cap- 
tured and analysed. 

Apart from taking observational notes, the researcher approaches individual
members of staff who voiced a particular position revealing their experiences,
perceptions and/or attitudes towards the implementation of the functional
multilingual language plan at institutional meetings, and conducts interviews
with them.
The participants are informed of the purpose of the study and their consent is 
obtained to report the data from the interviews anonymously. The researcher 
is in the process of obtaining approval from the institutional ethics committee
for the longitudinal part of the study. 

An important methodological issue is that of participant observation. The 
fact that the researcher is a participant in the communicative events she is 
observing should not be glossed over. In line with the ethical critique of eth- 
nographers about the role played by the participant observer (Tedlock, 2003: 
179-180), the role, experiences, perceptions and attitudes of the researcher in 
the observation of participation in communicative administrative events will 
be noted and reported where appropriate. 

Interviews are conducted with relevant participants to (1) ensure the fac- 
tual correctness of the observer's field notes and transcripts related to the
communicative event in question and (2) to obtain more information about a spe-
cific communicative event. These interviews might be formal, scheduled events 
or informal discussions with participants as part of ordinary interaction (Camp- 
bell & Gregor, 2002: 77). Interviewees will be identified as the study progress- 
es based on their input at institutional meetings that relate to their experienc- 
es, perceptions and attitudes towards the implementation of the functional 
multilingual language policy and plan of the NWU. An attempt will also be 
made to include the voice of the "silent" participants in the meeting as part of 
the longitudinal study. 

Texts related to the establishment and implementation of a functional 
multilingual language policy (with a focus on the administrative context) at 
the NWU are identified and analysed. Ultimately, this longitudinal study hopes 
to provide a deeper insight into people's experiences, perceptions, attitudes 
and understanding of a functional multilingual language policy and plan at a 
university in South Africa. 

The research method applied in this article included the following "steps". 
The researcher attended institutional meetings as a member and observed the
statements made by colleagues that related to their experiences, perceptions 
and/or attitudes towards the implementation of the functional multilingual 
language policy and plan at the institution. The observations were captured 
in the scheme presented in Appendix A during the meeting or directly after
the meeting. After the meetings, the researcher approached the individual 
and requested an interview to talk about the comment/s made about the im- 
plementation of the functional multilingual language policy and plan by the 
colleague during the meeting. The interviews were structured as follows: (a) 
first of all, the researcher explained the purpose of the research project; (b) the 
permission of the interviewee was obtained to report the data from the inter- 
view anonymously; (c) the researcher shared the description of the observa- 
tion of the communicative event with the interviewee to check if the observa- 
tions were documented correctly by the researcher; (d) the interviewee was 
requested to explain her/his feelings during the time when s/he made the state- 
ment related to the language policy and plan of the NWU; (e) specific ques- 
tions related to the communicative event were added. 


4. Presentation of preliminary data 

For the purpose of this article, not all the data obtained so far as part of the
study are reported. The data reported here focus on an analysis of interviews
with two participants about three communicative events in which they partici-
pated. Due to the longitudinal nature of the
project, more data will be added continuously until the themes that emerge
from analyses of the data start to repeat. In terms of a qualitative approach 
towards research, this saturation point indicates that a sufficient number of 
communicative events have been analysed to allow the publication of valid 
and reliable results from the project. No claims are made about validity and 
reliability of results at this stage of the project. The presentation of prelimi- 
nary data rather aims to illustrate the usefulness of this approach in a broad 
attempt to gather data on language experiences, perceptions and attitudes of 
university staff that are implementing a functional multilingual language pol- 
icy at the NWU. 

4.1 Data collection

The data discussed in this paper are presented in Appendixes A, C and E.
These data should be seen as the input material for the interviews with the
two participants whose experiences, perceptions and attitudes are reported
here, as explained in the methodology section above. The input questions for
the interviews are presented in Appendixes B, D and F.

4.2 Interview data with participant 1 

The participant in Appendixes A and C is the same person. Both communica-
tive events reported there were discussed with the participant during the same
interview. The data related to the second participant are presented in Appen-
dix E. The communicative events will be discussed separately.

4.2.1 Communicative event 1 - road shows (Appendixes A and B)

The context of the road shows is described in detail in Appendix A. The main 
elements to note are that the participant led an institutional team in a road
show on all campuses to explain the team's services, functions and ways of
working. The information was offered in a PowerPoint presentation and all
team members participated in the presentation. In order to accommodate the
multilingual nature of the different campuses, it was decided that presenters
would use Afrikaans slides on the Potchefstroom Campus; the participant in
the research project accordingly used Afrikaans slides but delivered her pres-
entation in English.

4.2.1.1 Assessment of correctness of description of incident with the participant
An important part of the methodology applied in this project was receiving
confirmation from the participants that the researcher documented the de-
tails of the communicative event correctly. The details, as captured in Appen- 
dix A, were confirmed as correct by the participant and the researcher ob- 
tained the permission of the participant to report the data as part of the re- 
search project. 


4.2.1.2 Description of feelings of participant during the road show presentation

As explained briefly in section 4.2.1, the participant wanted to accommodate
the multilingual nature and policy of the institution and therefore decided to 
offer her presentation on the Potchefstroom Campus in English, but to use 
Afrikaans slides. The input questions used in the interview are presented in 
Appendix B. The participant agreed that she struggled to co-ordinate her Eng- 
lish presentation with the Afrikaans slides. The participant agreed that this 
might have made her seem "clumsy" or "ill-formed" or "unconvinced" about 
the content of her message. The participant also agreed that this was an unde- 
sirable situation for her to be in, as institutional leader. Despite the difficulties 
experienced, the participant insisted that it was important to use Afrikaans at 
the road show session on the Potchefstroom campus, because it added "au- 
thority" to the leadership role the participant plays institutionally. It might 
have been better if an Afrikaans member of the team had operated the Afrikaans
slides on behalf of the participant. It was agreed that this would be done in 
similar situations in future. The participant was upset by the comment of the 
school director about the poor translation of the term "Programme Qualifica- 
tion Mix". It was translated as "programkwalifikasie-mengsel". The participant 
felt that the school director did not appreciate the efforts made by the institu- 
tional office to respect the language preferences of the Potchefstroom campus. 

4.2.2 Communicative event 2 - Strategic planning session by extended
senior management (January 2006) (Appendixes C and D) 

The full details of the context of this communicative event are presented in 
Appendix C. The input questions for the interview with the participant are 
presented in Appendix D. The communicative event took place at a senior 
management planning meeting at the beginning of 2006. During the meeting, 
a lot of Afrikaans was used by colleagues. At the end of the meeting the chair- 
person requested feedback on the planning session from all participants. The 
participant was the third last person to comment and in her comments she 
requested colleagues who use Afrikaans at these meetings to remember that 
there are second and third language users of Afrikaans at the meeting and 
that they should adjust the pace and choice of vocabulary to enable those 
colleagues to also follow their comments. A point of interest was that inter- 
pretation services were available at the meeting, but that the participant chose 
not to make use of the service. 

4.2.2.1 Assessment of correctness of description of communicative event with the
participant 

Again, the methodology applied in the project required the researcher to
confirm the correctness of the description of the communicative event (full
details in Appendix C) with the participant. In this case, the participant con- 
firmed that the communicative event was represented correctly in the field 
notes of the observer and she did not want to add or delete sections of the 
description of the event in the field notes. 

4.2.2.2 Description of feelings of participant when utterance was made

The main emotion underpinning the participant's utterance in this commu-
nicative event was anger. The participant argued as follows: "My English
proficiency is much
better than that of the colleagues around me. But I am not arrogant about it. I 
do not use big words or speak fast when I speak English at work, because I 
know that some colleagues might not follow me". The participant's feeling 
was that she took great care when she used English to speak slowly and to 
select words that less proficient English speakers would understand. She did 
not experience the same care when Afrikaans speaking people used Afrikaans 
at institutional meetings. This indicated to her that these speakers did not 
want to accommodate or reach out to her, and that made her feel angry. 

Furthermore, she argued that she went through the pain of using English, 
although it was not her mother tongue, to accommodate colleagues who did 
not understand her mother tongue. She felt great pity for people who had to 
use English while they were not proficient enough in English. But she experi- 
enced mother tongue speakers of Afrikaans who used Afrikaans at a fast pace 
and who did not carefully select their words to improve the chances of less 
proficient Afrikaans speaking colleagues to understand them, as arrogant. 

4.2.2.3 Use of interpretation services by the participant

In response to the question related to the use of the interpretation services, 
the participant maintained that she understood Afrikaans and therefore she 
did not need the interpretation services. She offered Afrikaans as a subject in 
her undergraduate and honours degrees and she stated that her reading and 
writing proficiency in Afrikaans were good. She agreed that her speaking and 
listening proficiency in Afrikaans were not good, because she did not have 
opportunities to practise it for many years.

She believed that her ability to understand Afrikaans contributed substan- 
tially to her acceptance by some colleagues at the institution. She maintained 
that her use of the interpretation services was unnecessary if colleagues who
used Afrikaans took care when they used it to speak slowly and to select less 
"technical" or "academic" words. She argued that in our multilingual work- 
place, we should all be compromising to accommodate each other. She com- 
promised by using English and by speaking slowly and selecting her words 
carefully. When Afrikaans speaking colleagues did not do the same, she
"switched off" and felt that she was not part of the conversation and deci- 
sions that followed any more. 

4.2.3 Communicative event 3 - Strategic planning session by extended senior 
management (November 2005) (Appendixes E and F) 

The details of the communicative event are presented in Appendix E and the 
input questions for the interview with the participant are presented in Ap-
pendix F. This was a two-day planning meeting of the senior management
team of the NWU that was held away from the campuses. Interpretation serv- 
ices were available and full group discussions as well as break away sessions 
into smaller groups took place. At the end of the planning meeting, the chair- 
person requested participants to share their views about the effectiveness of 
the session with the group. The participant raised the issue of the use of Afri- 
kaans at the meeting as a problem. He communicated that he could not un-
derstand why colleagues who are perfectly conversant in English continued 
to use Afrikaans at these meetings. He communicated frustration and anger at 
people who assumed that every person understood Afrikaans and he was 
particularly upset at the use of Afrikaans in the breakaway groups where the 
interpretation services were not available. 

4.2.3.1 Assessment of correctness of description of communicative event with the
participant 

The participant agreed with the correctness of the description of the commu- 
nicative event at the first interview with the researcher. After the interview, 
the researcher sent the notes to the participant and the participant had an 
opportunity to add to the description. 

4.2.3.2 What did you feel when you raised the issue?

The participant expressed frustration. The participant regarded English as the 
"operational language" in South Africa. I think a definition of "operational 
language" would be the language which all South Africans understand. I will 
need to clarify this definition with the participant in future. The participant 
was frustrated by the knowledge that many of the people who used Afrikaans 
in sessions were able to speak English. It did not make sense to the participant 
that they continued to use Afrikaans. 

The participant was aware that some participants claimed that they strug- 
gled to express themselves fully or with ease in English. The participant did not
believe this claim. The participant said that many of them used English very 
well. It was no problem to the participant that colleagues used an Afrikaans 
word when they struggled to find the right English word. Another colleague 
usually helped the person out with the correct English word and the partici- 


58 


Ensovoort: ]aargang 1 1 , nommer 2, 2007 


pantbelieved that this practice was good for language learning. The colleague 
who struggled to find the word could learn a new English word and other 
colleagues could learn an Afrikaans word. 

Another line of thinking by the participant was that management should
stop wasting so much money and resources on implementing a functional multi-
lingual language policy. In the opinion of the participant, management should
decide that English is the operational working language for the institution. 
The participant observed that management had taken other "unpopular" de- 
cisions that did not necessarily reflect the view of all employees (e.g. about 
institutional credit weightings for modules), and that was fine, because in the 
end management had the mandate to take these decisions. The participant 
believed the same should be done with a management declaration that Eng- 
lish was the operational working language of the institution. 

The participant was frustrated by what he regarded as the institution's waste of
money in its insistence on using the interpretation services at meetings (e.g. the sen-
ior management meeting and the Institutional Senate meetings). The partici- 
pant said that very few people used the earphones when colleagues used 
English and the participant was one of about 3 people in these meetings who 
used the earphones when Afrikaans was used. The participant asked me if I 
knew how many people could speak ONLY Afrikaans at the institution, be- 
cause the interpretation service would be imperative for them. 

The participant was also frustrated by the "disruptive" effect that the inter- 
pretation service had for him at meetings. The participant was particularly 
frustrated when the interpreters were, in his opinion, too slow, and he found
that his concentration was broken by having to put on and take off the ear-
phones repeatedly throughout the meeting.

4.2.3.3 Why does it bother you that colleagues use Afrikaans? The interpretation
services are supposed to ensure that we understand what is said at the 
meeting. What is your opinion about the use of the interpretation services? 
The participant shared an important part of his personal history to answer 
this question. Due to apartheid laws, the family of the participant relocated to 
England when he was about 3 or 4 years old. The participant's parents got
divorced in England and the participant's father died before he was 8 years 
old. The participant stated that he did not have a pleasant childhood. These 
experiences came as a direct result of the apartheid policies in South Africa 
and Afrikaans was directly related to that. 

On the other hand, the participant stated that this experience also had 
positive results. It took him to other countries where he had opportunities to 
learn and study. The participant might not have had the same opportunities
if the family had stayed in South Africa. The participant tried to see the
positive side
to the experience as well. However, the participant had mixed feelings when 
he heard or thought about Afrikaans. 

The participant did not see the interpretation services as a tool that assist- 
ed in facilitating an effective work environment in a multilingual context. The 
participant was frustrated by the "disruptive" element of the interpretation 
services on his ability to participate in meetings and the expense of the inter- 
pretation services bothered him as well. According to the participant the in- 
terpretation services did not add anything to the effectiveness of the meeting. 
The participant's opinion remained that colleagues could all use English, and 
that meetings would be more effective if all colleagues used English. 

The participant also expressed that he did not believe it was sensible for 
colleagues to use any language other than English at the institution. Although 
he was able to speak SeSotho, he claimed that he resisted using it in class or 
meetings, because he deemed it an "unfit" thing to do at the institution. I am 
not sure what languages the participant can speak. It might be that time in 
exile resulted in him being able to speak English well, but had the detrimental 
effect of providing no opportunities to gain proficiency in other South African 
languages. If this is true, then any institutional attempt at multilingualism 
will disadvantage and "threaten" this participant. The researcher will have to 
explore this issue further with the participant. 


5. Discussion and interpretation of data 

As stated earlier in the article, the data presented in this paper should be re-
garded as preliminary data, due to the qualitative and longitudinal approach 
adopted for this study. No claims are made that the data presented in this 
paper provide final conclusions about this matter. The discussion and
interpretation of data from the article is therefore brief and tentative. 

5.1 Brief discussion and tentative interpretation of current data, as well as
tentative implications of findings 

In terms of the data reported in this paper (presented in Appendixes A - F), at 
least three issues should be discussed. The first issue relates to the notion of 
"power" and language use. In both the communicative events related in Ap- 
pendixes A and C, the participant demonstrates that she is aware of issues 
related to language and power. In both instances, the participant expressed 
notions such as added "authority" to her institutional leadership and the no- 
tion of "investment" when she uses Afrikaans in meetings on the Potchef- 
stroom campus and institutional meetings. 



In contrast to this, the participant in Appendix E challenges the usefulness 
of Afrikaans in the institution. The painful personal experiences of the partic- 
ipant make it impossible for him to see any useful role for Afrikaans at the 
institution. This could be regarded as one of the mainstream views about the
use of Afrikaans in post-1994 South Africa. Although applied linguists and
language planners might consider this an unproductive way to view the role
of Afrikaans in post-1994 South Africa, the views offered by this participant
certainly cannot be ignored by the NWU. Furthermore, it
might be that the multilingual proficiency of this participant is lower than 
that of many of the colleagues around him and that the notion of "multilin- 
gualism" might seem "threatening" to him.

What are possible implications from these tentative findings? The manag- 
ers of the language policy and plans of the NWU should take note of these 
attitudes and realise that even more communication about the benefits of bi- 
lingualism should be distributed in the institution. Furthermore, the institu- 
tion needs to assure colleagues who are not multilingual that they will not be 
excluded from participating in the operations of the institution. A possible 
perception of any "hegemony" of multilingualism might be detrimental to
the institution's attempt to include all colleagues. 

Secondly, the perceived "language arrogance" of Afrikaans speakers using 
"technical" or "academic" words at a fast pace at meetings is an important 
notion to consider. The Afrikaans users at this specific meeting are probably 
not aware of the very explicit meaning this participant attaches to their use of 
"technical" or "academic" Afrikaans at a fast pace. They probably also uncon-
sciously assume that due to the availability of interpretation services, they do
not need to adjust their language use at all, because non-Afrikaans speaking 
colleagues would receive appropriate interpretation in English. 

It seems as if these assumptions and the consequent arrangements made 
in terms of interpretation services belie the complexity of the levels of multi- 
lingualism of colleagues at the institution. An underlying assumption/percep- 
tion is that colleagues either understand Afrikaans fully, or do not understand 
it at all, therefore providing interpretation services is sufficient in implement- 
ing the functional multilingual language policy of the institution. 

An analysis of this communicative event indicates that this arrangement is 
not enough to facilitate participation at this multilingual workplace. This analy- 
sis seems to suggest that more should be done to sensitise and train colleagues 
at the institution about language habits and behaviour that would facilitate 
optimal participation of all members at meetings. One important notion that 
colleagues need to be aware of, is that individuals at these meetings display 
different levels of proficiency in terms of different languages. This is a given 
in any multilingual workplace. Apart from arranging for interpretation serv-
ices at these multilingual institutional meetings, all colleagues attending these 
meetings should be sensitised and trained to use a slower pace and to select
words more carefully, enhancing the opportunities of second and third lan-
guage users to participate optimally at these meetings. Fur-
thermore, this very reasonable request might enhance the ability of the inter- 
preters to do their jobs at an even higher level of excellence as well. These
tentative results from this singular analysis of a communicative event at a 
multilingual workplace seem to suggest that there are training and develop- 
ment needs that should form part of the implementation plan for a functional 
multilingual language policy and plan at this institution. All colleagues should 
be sensitised and trained in terms of the pace and selection of words when 
they participate at these meetings. Furthermore, we need to spend more time 
to ensure that we provide our contributions in such a manner that the inter- 
preters, as well as second and third language users of languages used at the 
meeting, have the best possible chance to participate optimally at the meet- 
ing. 

6. Conclusions 

One of the tentative conclusions from this case study is that the complexities 
of the levels of multilingual language proficiency need to be recognised to 
design appropriate mechanisms and support structures to enable optimal par- 
ticipation of all colleagues in multilingual meetings. It seems that the use of 
interpretation services as the only strategy to facilitate participation in multi- 
lingual meetings, without considering the levels of different multilingual pro- 
ficiencies brought to the discussion table in multilingual settings, could have 
unintended, adverse effects on the levels of participation of colleagues.
An acknowledgement of the different levels of language proficiency of multi- 
lingual colleagues raises the level of sophistication a multilingual institution 
should invest in if it is serious about implementing a functional multilin-
gual language policy and plan. The need for sensitisation and training and 
development opportunities for colleagues participating in multilingual meet- 
ings seems imperative following on the analysis from this one case study. 

A second tentative conclusion is that a qualitative, longitudinal approach 
seems to be very productive in revealing the intensely personal and individual
attitudes, experiences and perceptions of language users that influence their 
language behaviour, as demonstrated by communicative events in which they 
participate. Along with other methods and approaches, data gathered in this 
manner might provide language planners and managers at universities in 
South Africa with relevant conclusions upon which decisions for language 
planning could be based. 



Ultimately, insights from this study, which aims to gather data in a longitudinal 
manner, could be used by language planners and language planning theorists 
to adjust existing assumptions about the pros and cons of language 
planning in multilingual contexts. Eventually, these data would provide insight 
into the effect of implementing a functional multilingual language policy and 
plan on the nature and level of participation in this university as a workplace. 
This brave attempt by the NWU to implement a functional multilingual lan- 
guage policy and plan deserves the careful attention of a longitudinal study 
of this nature. 


Bibliography 

Campbell, M. & Gregor, F. 2002. Mapping social relations: A primer in institutional ethnography. 
Aurora, Ontario: Garamond Press. 

February 2006. Towards an institutional language plan at the NWU. 

Miller, P.J., Hengst, J.A. & Wang, S. 2003. Ethnographic methods: Applications from Develop- 
mental Cultural Psychology. In: Camic, P.M., Rhodes, J.E. & Yardley, L. (eds). 2003. 
Qualitative research in psychology: Expanding perspectives in methodology and design. Washing- 
ton DC: American Psychological Association. 

Saville-Troike, M. 2003. The ethnography of communication. Third edition. Oxford: Blackwell 
Publishing. 

Tedlock, B. 2003. Ethnography and ethnographic representation. In: Denzin, N.K. & Lincoln, 
Y.S. (eds). 2003. Strategies of qualitative enquiry. Thousand Oaks: SAGE Publications. 

Van Rooy, B. 2005. The feasibility of simultaneous interpreting in university classrooms. 
Journal of Southern African Linguistics and Applied Language Studies, 23(1): 81-90. 




APPENDIX A - Road shows by the NWU 


Type of event: Road shows by division of an institutional manager at the NWU 

Date: 7, 10 and 17 March, 2005 

Venues: Senate Halls and Lecture Hall 

Nr of people: About 50 on Potchefstroom Campus; 
about 25 on Mafikeng and Vaal Triangle Campuses 

Role/relationship 
of observer in event: 

Member of Extended Senior Management team; 
participant in the presentation 

Category 

Field notes 

Scene 

Type of event: A meeting (information session & discussion) 

Topic: Information about functions of institutional office 

Purpose: To inform academic managers about the following: 

i. the gist of functions that will be performed by this institutional office 

ii. the division of functions related to this institutional office, that is between 
campus and institutional levels 

iii. clarifying the communication channels between the officers in the 
institutional office and campuses related to this portfolio. 

Setting 

Potchefstroom campus (7 March 2005, 09:00-11:00): 

The meeting was held in the Joon van Rooy Senate Hall which is a formal lecture 
room with fixed seats, a fixed front desk and fixed data projection facilities. 

Vaal Triangle campus (10 March 2005, 10:00-12:00): 

The meeting was held in the Old Mutual lecture room which is a venue with fixed 
seats, a fixed front desk and fixed data projection facilities. 

Mafikeng campus (17 March 2005, 10:00-12:00): 

The meeting was held in the Senate Room which is a venue with fixed tables and 
seats, arranged in the form of an oval, with fixed data projection facilities. 

Emotional tone 

of the event 

Potchefstroom campus: It was a serious meeting, with some hostile and some 
supportive elements. About 50 academic managers (white, majority male) attended 
the meeting. The campus rector (white female) was present. There was engagement 
with the ideas in the presentation. A discussion of about 30 minutes was conducted 
after the presentation. A dean (white male) was unhappy about 
the possible extra cost for the numerous staff that need to be appointed in the 
institutional office, and there was a remark by a school director (white male) about a 
term on the Afrikaans slides of the IM member. The school director felt the translation 
was poor. The term in question was “Programme Qualification Mix”, which was 
translated as “Programkwalifikasiemengsel”. The campus rector joked and replied 
that in future we will continue to use “Programme Qualification Mix”. Two colleagues 
(white females) were supportive about an increase in staff in the institutional office. 
They argued that we need to expand the staff complement even further, because the 
work planned by this institutional division was important and the workload was too 
heavy for the staff suggested. 

Vaal Triangle campus: It was a serious meeting, attended by about 25 people 
(white, majority male), with no discussion after the presentation. After we prompted 
some discussion, the campus rector replied that the ideas presented will be 
discussed on relevant internal campus forums. One colleague did offer some input: 
he raised the idea of institutional budgets (as opposed to campus budgets) for some 
functions. 




Participants 


Message form 


Message content 


Perception of the 
effectiveness of the 
communicative event 


Mafikeng campus: It was a serious meeting, attended by about 25 people (black, 
majority male), with a fair amount of discussion after the presentation. The tone of 
the discussion was constructive and critical. It was constructive in the sense that 
campus colleagues agreed with the general gist of the presentation, but it was also 
critical, because colleagues requested some information about service level 
agreements. They were concerned that service delivery of functions performed by an 
institutional office on campuses might be delayed and they requested information 
about the mechanisms and procedures the institutional office plans to put in place to 
minimize this concern. 


The staff from the institutional office included an IM member (a black woman) and 
five directors (1 black woman, 1 white woman and three white men) that operate on 
institutional level in the portfolio of the IM member. The participants in the audiences 
were the academic managers on all campuses. The Campus Rector, Vice-Rector: 
Academic, Deans, School Directors / Heads of Department, Campus Registrars and 
relevant campus academic support service staff were invited. On the Mafikeng 
campus the majority of the audience was comprised of black men; on the 
Potchefstroom campus the majority of the audience was comprised of white, 
Afrikaans-speaking men and women; and on the Vaal Triangle campus, the majority 
of the audience was comprised of white, Afrikaans-speaking men. 


Spoken South African English, supported by PowerPoint slides, prepared for each 
campus. The main decision about message form was which language(s) should be 
used for the verbal presentations and on the PowerPoint slides. The institutional 
office decided to develop English PowerPoint slides, with an Afrikaans translation of 
the IM member’s section to be used on the Potchefstroom campus. It was decided 
that the IM member will use Afrikaans slides at Potchefstroom, although she will 
present in English, and that the rest of the division will use English slides at 
Potchefstroom, but that we will present in Afrikaans. On the other two campuses 
(Mafikeng and Vaal Triangle), we all used the English slides, and we presented in 
English as well. There is no budget or other internal support for the translation of 
documents used by the institutional office. Therefore, I offered to do the translation of 
the IM member's slides into Afrikaans. I had very little time to do this, and eventually I 
ended up doing it during an institutional senate meeting. I knew the translation was 
not perfect, and I struggled to find Afrikaans terms for national bodies such as the 
HEQC and terms used by the Department of Education, like PQM. I was seated next 
to the campus rector of the Potchefstroom campus and she assisted me with the 
translation of some terms during the senate meeting. 


A copy of the PowerPoint presentation slides is available for interested colleagues. 

A summary of the message content would be: 

• Members from the institutional office wish to inform campus academic managers 
of their vision of the work that must be conducted by this particular institutional 
office; and 

• Campus academic managers should respond critically to these suggestions by 
providing input that would tailor-make the functions performed by the institutional 
office for their campus. 


Potchefstroom campus: From the perspective of the IM office, the communicative 
event was successful. The audience interacted with the institutional office in the 
discussion about the proposal and there was very little disagreement with the 
proposal. The negative issues that were raised (e.g. increased expenses because of 
huge institutional staff complement) could be addressed (there are no new positions 




created for the institutional office; existing positions from the campuses are migrated 
to the institutional office), and the positive suggestions / input were added to the final 
proposal. If I consider the effectiveness of the decisions that were taken about 
message form (translation of the IM member’s slides into Afrikaans, while she 
presented in English; and the English slides of the other presenters, used while they 
presented in Afrikaans), I don't think the communicative event was wholly successful. 
The IM member is a seasoned public presenter who uses PowerPoint well. She was 
very uncomfortable with the Afrikaans slides while she presented at the 
Potchefstroom campus. She struggled to synchronize her English verbal 
presentation with her Afrikaans slides. This made her seem “clumsy” or “ill-informed” or 
“unconvinced” about the content of her message. This is a dangerous situation for 
her to be in, as institutional leader. She is the first black female institutional leader 
who has to interact with the dominantly white, Afrikaans, male academic community 
on the Potchefstroom campus. Particularly at the beginning of the merging process, 
she cannot afford to appear “clumsy” or “ill-informed”, because these members might 
decide that she is “incompetent” and therefore choose to ignore her or patronize her 
in future. The negative remark from a director about a term that, according to him, 
was translated “incorrectly” also puts a question mark on the possible success of 
the decision to translate the IM member’s slides into Afrikaans for the Potchefstroom 
presentation. 

Vaal Triangle campus: I am not sure that the communicative event was successful 
from the perspective of the institutional office. The audience did not engage with the 
proposal, because there was no immediate discussion after the presentation. The 
single request from a campus manager (for institutional budgets for certain activities) 
was added to the final proposal. However, in follow-up meetings with the campus 
rector, dean and chief director for infrastructure and facilities, it became clear that the 
message was internalised, that it was discussed at other internal forums at that 
campus, and it resulted in fairly serious human resources activities to restructure 
some campus staff so that campus functions proposed by the IM member and 
institutional team could be effected. The communicative event was therefore very 
successful. If I consider the effectiveness of the decisions that were taken about 
message form (using English slides and presenting in English), I think the 
communicative event was fairly successful. It seems that colleagues had to 
internalize the message first, and that they could not react to it immediately. 

When they did react later (after internal discussions), it reflected their deep 
understanding of the message. The influence of using only English (on the slides and 
in the presentation) should be investigated in future as an explanation for the lack of 
initial, immediate response to the presentation on this campus. 

MafiKeng campus: I am not convinced that the communicative event was successful 
from the perspective of the institutional office. The audience engaged with the 
proposal in a critical, but constructive manner that seemed to indicate that they 
understand the proposal and that they are willing to assist with its implementation, 
provided that service level agreements are stipulated. However, in follow-up 
discussions with academic managers on the Mafikeng campus, it was clear that they 
did not understand all the nuances in the proposal. I had to spend quite a lot of time 
to make sure that all the details of the proposal were understood. If I consider the 
effectiveness of the decision that was taken about message form (using English 
slides and presenting in English), I am not sure what the possible effect of using 
English is on the delay in information processing that I perceived in the follow-up 
discussions with academic managers. The influence of using only English (on the 
slides and in the presentation) should be investigated in future as an explanation for 
seeming delay in processing of the details of the message on this campus. 




What is learned about 

functional 

multilingualism? 

a) Revisit the idea of putting the IM member in the difficult position of using Afrikaans 
slides while she presents in English. It might make her seem “clumsy" and this may 
damage the impact of the message she brings, particularly because she is one of the 
first black female managers that addresses the mainly white, Afrikaans academics on 
the Potchefstroom campus. 

b) Don't try to accommodate the Afrikaans audience by translating your slides into 
Afrikaans yourself. They do not appreciate the effort and are critical of your attempts 
without offering better translations. It might be better to apologise at the beginning 
for the fact that you conduct your presentation in English, and to continue in English. 

c) Or, as an alternative to (b), find appropriate mechanisms and processes to inform 
Afrikaans-speakers of the fact that the institutional office would like to provide 
documentation in Afrikaans and English, but that there is no budget or other support 
for this decision, and therefore, the audience should please be tolerant of possible 
errors that might occur and the audience should know that the institutional office 
would appreciate constructive solutions to possible inappropriate translations. 

Other comments 

None 


APPENDIX B - Specific communicative event & participant information 

Date: 7 March, 2005 

Venue: Senate Hall (Potchefstroom campus) 

Nr of people: About 50 

Participant 1: Institutional Management (IM) member 
Gender: Female; Race: African; Age: 49 

Participant 2: Gender: n/a; Race: n/a; Age: n/a 


Relationship between participants & observer: Observer reports to IM member 


Participant 

Information about dialogue / communicative event 

Field notes about dialogue 

Observer 

No specific dialogue relevant to the communicative event. The issue the observer 
discussed with the participant related specifically to the road show conducted on the 
Potchefstroom campus: how effective was the choice of the team to provide 
PowerPoint slides in Afrikaans for the IM member, while she used English to present? 
The field notes in Appendix A that present a reflection on the effectiveness of the 
communication at the event provide the input for the interview with participant 1. 

Other comments / 
field notes / questions 
to ask participant 
in interview 

a) Ask participant to relate incident first. Then check if my field notes are correct. That is, 
the observer's impression was that the participant struggled to present from the 
Afrikaans slides. Is this true? 

b) Explain your feelings when you made the utterance / during the presentation? 




APPENDIX C - Planning meeting of Senior Management of NWU 


Type of event: Formal planning meeting of Extended Senior Management of NWU 

Date: 23 January, 2006 

Venues: Transnet Room (Potchefstroom) 

Nr of people: About 45 

Role/relationship of observer in event: Member of Extended Senior Management team; 
attending with line manager & other peers in portfolio of line manager 

Category 

Field notes 

Scene 

Type of event: Formal meeting called by Vice-Chancellor 

Topic: Campus feedback on Institutional Office & Discussion of IP 

Purpose: (a) Allow campuses an opportunity to discuss frustrations/concerns about 
services delivered by Institutional Office 
(b) Discussion of IP by Institutional Office staff 

Setting: Formal conference venue, chairs organised in horse shoe shape, no 
“seating arrangements”, IP provided as document for discussion before the time 

Emotional tone 

of the event 

Discussion a: campuses openly discussed their frustrations/concerns about 
services / support delivered from the institutional office. Campus participants were 
clear and constructive. Some institutional officers struggled not to "defend” them- 
selves. The VC made it clear that only clarifying questions should be asked. It was a 
very productive session. 

Discussion b: discussion of IP by institutional office. Boring session, not structured 
well, not very productive. 

Participants 

Chair was the VC (1) 

Institutional Management (IM) members = VP, campus rectors, directors (7) 
Institutional colleagues reporting to IM members (about 20) 

Campus management representatives (about 15) 
Interpreters (2) 

Message form 

Mainly spoken SA English & Afrikaans. Interpretation available from Afrikaans to 
English. IP written in English. 

Message content 

(a) Campuses raise their frustrations / concerns about services/support delivered by 
institutional office. 

(b) Institutional office discussed campus issues & possible action plans (after campus 
representatives left the meeting) and discussed the IP 

Nature of participation 

(a) Campus representatives participated openly, fully, effectively. It was clear that 
there was great appreciation for the opportunity to discuss frustrations/concerns. 

A mix of Afrikaans/English was used. 

(b) Institutional office discussion of IP - “difficult" to participate: 

- because one had to ask clarifying questions and not “defend" and 

- because all members do not know IP equally well & aim was unclear 

Perception of the 
effectiveness of the 

communicative event 

During the event, colleagues mostly used English. There were one or two colleagues 
who used Afrikaans throughout the session as well. At the end of the session, when 
colleagues reflected on the success of the meeting, many colleagues started to use 
Afrikaans all of a sudden. Interpretation services were available. 




What is learned about 

functional 

multilingualism? 

It seems that many Afrikaans colleagues are becoming used to discussing relevant 
business aspects in English. However, when they were asked to reflect on the 
meeting, many of the colleagues resorted to Afrikaans. It could be that reflection 
requires a higher-order thinking/cognitive skill and that Afrikaans 
colleagues are still more comfortable in Afrikaans when they have to conduct such 
a task? 

Other comments 

A lot of English used, selected and announced Afrikaans. At the end of the meeting, 
when the VC requested members to reflect on the session, a lot of Afrikaans. 


APPENDIX D - Specific communicative event & participant information: 


Type of event: Specific communicative event 

Date: 23-01-2006 

Venue: Transnet (Potch) 

Nr of people: About 45 

Name 1 (post): Participant 1 - Gender: Male; Race: White; Age: 50+ 

Name 2 (post): Participant 2 - Gender: Female; Race: Black; Age: 50 

Relationship between Name 1 & 2 & Observer: 2 Reports to 1 - Observer reports to 2 

Participant 

Information about dialogue: 

Field notes about dialogue 

1 

Invited all participants at the end of the meeting to reflect on the usefulness of the 
session. Almost all participants that were Afrikaans used Afrikaans, contrary to their 
behaviour earlier in the meeting where many of them used English. The participant 
was about the 3rd last person to reflect. 

2 

I have great appreciation for the fact that we all have different mother tongues. I also 
understand that people use their mother tongues when they start to relax. But I do 
want to seriously request members to please keep in mind that we do not all share 
the same mother tongue. If members could please remember to speak slowly and 
clearly and choose their vocabulary carefully when they use their mother tongues in 
the presence of non-mother tongue users, it will be appreciated. 

1 

Acknowledged the remark and agreed. 

Other comments / field notes / questions to ask participant in interview 


a) Ask participant to relate incident first. Then check if my field notes are correct. 

b) Explain your feelings when you made the utterance. 

c) Why did you not use the interpretation services? If I remember correctly, you did not even take a set 
of ear phones at the beginning? 




APPENDIX E - Formal planning meeting of Extended Senior Management of NWU 


Type of event: Formal planning meeting of senior management 

Date: 21-11-2005 

Venue: Crystal Springs conference centre (Rustenburg) 

Nr of people: About 45 

Role/relationship of observer in event: Member of extended senior management 

Category 

Field notes 

Scene: 

Type of event: Formal strategic planning meeting called by Vice-Chancellor 

Topic: Strategic priorities for 2006 

Purpose: An opportunity for Extended Senior Management of the NWU to 

discuss strategic priorities for 2006 

Setting: Conference venue away from campus where all colleagues stayed 

for 2 days (sleeping over for 1 night) 

Emotional tone 

of the event 

It was the first meeting of this nature of the extended senior management team. 
There was a sense of excitement, but colleagues were also still getting to know each 
other. The deans from the Potchefstroom campus came to the meeting prepared to 
take stands on some issues which they regarded as important. The deans from the 

Mafikeng campus had a very successful strategic planning session for research 
development in the previous week and also communicated a sense of shared views 
about certain matters. 

Participants 

Chair was the VC (1) 

Institutional Management (IM) members = VP, campus rectors, directors (7) 

Institutional colleagues reporting to IM members (about 20) 

Campus management representatives (about 15) 

Interpreters (2) 

Message form 

Mainly spoken SA English & Afrikaans. 

Interpretation available from Afrikaans to English. 

Documentation was in English. 

Message content 

Due to the confidentiality of these types of meetings, no details about the message 
content will be shared. In general, the message content included: 

(a) discussion of the strategic priorities of the NWU for 2006 

(b) feedback from the most recent Council meeting 

(c) feedback from the “What Works” survey 

(d) small group discussions about specific elements of the “What Works" survey 
to discuss interventions / recommendations 

Nature of participation? 

In general there was a spirit of collaboration and co-operation. 

At various points, the deans made sure that they drove home the issues they believe 
are important. 

Perception of the 
effectiveness of the 

communicative event? 

During the event, the chairperson continued to use Afrikaans and English 
alternately for consecutive agenda items and the interpretation services were 
available throughout the planning session. During the two days, it became clear 
that some Afrikaans colleagues struggle to express themselves eloquently in English. 

Choice of language use during the break away sessions seemed to have been 
problematic? The comments made by the participant at the end of the session are 
proof of this, as well as the observer's own discomfort in her break away session. 

(In the observer’s break away session there were six participants - FV, ALC, MNT, 





ASCVR, TC, MV - and only one of the participants did not use Afrikaans as a mother 
tongue. During the session, the Afrikaans colleagues continually spoke Afrikaans, 
although they were quite aware that one member does not speak Afrikaans that well 
and that there were no interpretation services. The observer tried to interpret some of 
these Afrikaans statements in general so that one member would not be excluded, 
but this was an “awkward" thing to do, so she did not interpret all that was said.) 

What is learned 

about functional 
multilingualism? 

It seems that small group communication might be problematic in a context where an 
institution adopted a functional multilingual approach. If the interpretation service is 
not available in every break away session, it seems that more guidelines/“language 
table manners" need to be developed and shared so that all participants could 
benefit from the small group discussion as well? Communication in the larger groups, 
where interpretation services were available, proceeded with apparently more 
success? 

Other comments? 

A lot of English used, selected and announced Afrikaans. The communicative event 
of interest occurred at the end of the meeting when participants reflected on the 
success/effectiveness of the meeting. 


APPENDIX F - Specific communicative event & participant information 

Type of event: Specific communicative event 

Date: 21 November 2005 

Venue: Crystal Springs Conference Venue 

Nr of People: About 45 

Participant 1: Dean of a Faculty - Gender: Male; Race: ?; Age: ? 

Participant 2: Not applicable - Gender: N/A; Race: N/A; Age: N/A 

Relationship between participants and observer: Observer supports academic 
managers with academic staff development and student academic development 
matters on an institutional level 


Participant 

Information about dialogue: 

Field notes about dialogue 

1 

I do not want to upset people at the end of the session. But I feel I have to say 
something about language. I do not understand why people who can speak English 
insist on using Afrikaans in this meeting? I appreciate the interpretation services but 
I struggle to understand why people who can speak English insist on using Afrikaans 
at meetings? I was a member of a smaller group yesterday at the break away 
sessions. When I walked into the room, somebody was telling a joke in Afrikaans. 
The person saw me enter and continued to tell the joke in Afrikaans. I experienced 
this as rude. They did not even attempt to include me by translating the joke. 

Other comments / field notes / questions to ask participant in interview 

Note: 

The chairperson of the meeting did not attempt to answer the participant/to provide 
reasons/conclusions to the discussion. The issue was raised (as many other issues 
were at the end of the session) and not really dealt with by the meeting. 

Questions to 
participant: 

1) Did I capture the incident correctly? How could I improve the accuracy? 

2) What did you feel when you raised the issue? 

3) Why does it bother you that colleagues use Afrikaans? The interpretation 
services are supposed to ensure that we understand what is said at the meeting. 
What is your opinion about the use of the interpretation services? 


Test efficiency and utility: longer or shorter tests 

Jurie Geldenhuys 
University of Pretoria 



Questions on test efficiency and utility, though often neglected, remain critically 
important considerations for test design. Conventional wisdom is that the longer 
the test, the more reliable it potentially is. However, while tests can be both valid 
and reliable and in that sense efficient, if they are too long that might make them 
less useful. A test has low utility if it is too long or requires elaborate logistical 
and administrative arrangements. 

This paper will explain to what extent using a shorter test would possibly 
limit its efficiency on the one hand, while, on the other, make it more useful. This 
issue will be further elaborated through the use of examples taken from the tests of 
academic literacy levels (TALL) from the University of Pretoria. The method used 
for comparing the shorter and longer versions of these tests will be statistical 
measures relating to the reliability (alpha) of each version. An argument will be 
proposed of how large a drop in reliability may still be acceptable for a shorter test. 


1. Introduction 

If we characterize language tests and, more specifically, tests of academic literacy 
as applied linguistic instruments, we may argue that they need to conform to the 
conditions that apply to the development of responsible applied linguistic de- 
signs. This entails that language tests should be both valid and reliable. These 
conditions are indeed necessary for such tests, as is their theoretical defensibility 
or so-called construct validity. 

Though often neglected, questions on test efficiency and utility also remain 
critically important considerations for test design. Conventional wisdom is that 
the longer a test is, the more reliable it potentially is. However, while tests can be 
both valid and reliable and in that sense efficient, if they are too long, that may 
make them less useful. A test has low utility if it is too long or requires elaborate 
logistical and administrative arrangements. 

The aim of this article is to explain whether using a shorter test would possi- 
bly limit its efficiency on the one hand, while, on the other hand, make it more 
useful. This issue will be elaborated with reference to the design of tests to meas- 
ure academic literacy levels, and specifically the focus on test utility that present- 
ed itself when we started developing a postgraduate test of academic literacy 




(TALPS) for use at the University of Pretoria, as well as at other interested academ- 
ic institutions for higher learning in South Africa. 

The aim of developing TALPS was more specifically to test the academic 
literacy of postgraduate students who enrol at the University of Pretoria for a 
postgraduate degree. The original format was more or less equivalent to that of 
TALL (the test of academic literacy levels), which is administered to first year stu- 
dents at the beginning of each academic year at three universities in South Afri- 
ca, but was extended to include a section on text editing as well as a writing task, 
both of which are not included in TALL. The level of the postgraduate test is also 
higher in the sense that it includes reading passages that are more difficult than 
those in TALL. The first draft of the test comprised 173 items. By means of two 
rounds of piloting, the number of questions was first reduced to 150 items and 
then to 120 items. The intention is to shorten the test to a final draft which will 
eventually count out of 100 marks, consisting of 20 marks for a written text and 80 
marks for multiple-choice questions. 
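As a rough indication of what such shortening might cost in reliability, the Spearman-Brown prophecy formula (a standard psychometric result, not one applied explicitly in this article) predicts the reliability of a shortened or lengthened test from its current reliability. The sketch below is an illustration under assumed figures: the function name is the author's of this sketch, and the example uses a 120-item test with a reliability of 0.85, close to the pilot value reported later in the article.

```python
def spearman_brown(reliability: float, old_len: int, new_len: int) -> float:
    """Predict test reliability after a change in test length,
    using the Spearman-Brown prophecy formula."""
    k = new_len / old_len  # ratio of new length to old length
    return k * reliability / (1 + (k - 1) * reliability)

# Illustrative figures only: shortening a 120-item test
# with reliability 0.85 to 100 items.
print(round(spearman_brown(0.85, 120, 100), 3))  # 0.825
```

On these assumed figures the predicted drop is modest (from 0.85 to about 0.83), which is exactly the kind of evidence one would weigh when asking how large a drop in reliability is still acceptable for a shorter test.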


2. Test length 

The length of a test depends on its purpose and the statistical properties of the 
items (Owen & Taljaard, 1996:22). If the items are heterogeneous, more items are 
needed than when the items are of a homogeneous nature. The more heterogene- 
ous items there are, the less reliable the test potentially becomes. One can state 
here that in the case of tests of academic literacy, items, all being language related, 
probably are of a more homogeneous nature, in a way making the tests more 
reliable. As Van der Slik and Weideman (2005) have pointed out, however, the 
richer the construct of a test, the more heterogeneity the test designer may have to 
tolerate. Compare, for example, the following factor analysis of one of the pilots 
for TALPS (Figure 1). 

Here it is evident that there is some measure of heterogeneity: item 73, as well as items 62-66, seems to fall outside the area covered by the others. Overall, however, the reliability of the test (as measured by Cronbach's alpha) was calculated at 0.85, which is rather high for a pilot. In light of the factor analysis, the test items are not as heterogeneous as one might have expected: most cluster around the right end of factor one. This means that the items are relatively homogeneous, which probably supports the idea of a shorter test. Moreover, the length of a test also depends on the time available for administering it. In addition, the form and content of the items, as well as their difficulty level and the time needed to read them, all contribute to determining the length of a test (Owen & Taljaard, 1996:23).




[TiaPlus factor analysis plot (Subgroup 0, Subtest 0): items plotted against Factor 1]

Figure 1: Measure of homogeneity/heterogeneity of TALPS first pilot


3. Administering tests 

Weideman (2006a: 83) points out that, since every test has to be implemented, its 
design and development 

anticipates its contextualization within some social environment, and the way it 
will operate and regulate the interaction between test designers, test adminis- 
trators, test takers (testees), administrative officials, lecturers, and others in- 
volved. This is the social dimension that is unique to each implementation (or 
administration, to use testing terminology) of the test, and it expresses for this 
particular case the relation between the technical and social aspects of our world. 

At the beginning of each academic year one working day (eight hours) is set aside to 
administer the test of academic literacy levels (TALL) to approximately 4400 stu- 
dents at the University of Pretoria alone. This is over and above the Afrikaans 
version, TAG, or Toets van Akademiese Geletterdheidsvlakke, which is administered to 
approximately 2600 students, bringing the total to about 7000 students. Administering a test to such a number of students is a mammoth task and places a huge administrative burden on the personnel involved. Apart from administrative constraints, logistical factors play a role as well, for there are only a limited number of large venues at the university in which the test can be administered. Weideman (2006a: 83; 2007b) notes that there are, amongst other things, a number of trade-offs between the technical utility of a test and its reliability. A test may be so short that it has high utility, but perhaps too short to achieve the desired level of reliability. Thus:




The utility of a test requires that the test designer should carefully weigh a 
variety of potentially conflicting demands, and opt not only for the socially 
most appropriate, but also for a frugal solution. The various trade-offs ... that 
present themselves to test designers, not only between conflicting sets of po- 
litical interests, but also between reliability and utility [generate a need] to 
weigh or assess, to harmonise and then justify a tough and responsible techni- 
cal design decision. (Weideman 2006a: 83-84) 

Taking the above into consideration, the question could be asked: could a shorter test add to the utility of the test, even though it may lessen its efficiency?
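The trade-off between length and reliability can also be expressed numerically. The standard Spearman-Brown prophecy formula (not part of the analyses reported here; this is merely an illustrative sketch) predicts what happens to reliability when a test is lengthened or shortened by a given factor:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when test length changes by length_factor
    (length_factor < 1 means a shortened test)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Illustration: shortening a 173-item draft with alpha = 0.85 to 120 items
predicted = spearman_brown(0.85, 120 / 173)  # roughly 0.80
```

On these illustrative numbers, dropping roughly a third of the items would still leave the predicted alpha above the conventional 0.7 threshold; it is precisely this kind of weighing-up that the utility question demands.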

Before elaborating further on this issue, I would like to give the original argu- 
ment for a shorter test in the case of TALPS. When we started constructing a test 
for postgraduate students, the test consisted of about 173 items and would take 
students approximately two and a half hours to complete. Given the potentially 
heterogeneous test population - students that register for postgraduate studies at 
the University of Pretoria vary from those who are very literate in English to 
students who come from, for example, French-speaking African countries and 
countries in Asia and who are not that literate in English - we needed an instru- 
ment to give a first, rough indication of level in order not to waste the time of 
those competent enough to make a success of their studies. The line of thought 
therefore was to administer a shorter test to all candidates, and after receiving the 
results of this test administer the longer test to candidates who were potentially at 
risk of not completing their studies in the desired time. 

To answer the question of whether a shorter test is possible, I will in the first 
place give a summary of the grounds on which our test of academic literacy levels 
is conventionally set, and the components that usually make up the test. Then I 
will state an argument for omitting some of these components in order to shorten 
the test. Having taken a decision on which components to consider using for a 
shorter test, these components will be correlated with one another and with the 
test total. The initial thinking was that, if there is a good correlation between 
some of the components and the test total, it could be argued that the test could be 
shortened by including only these specific subtests. 


4. Validity, reliability and the construct 

According to Weideman (2006a: 74), language testers have traditionally attempted to ensure fairness in language testing by designing tests that are both valid and reliable. Henning (1987:89) defines validity as follows:




Validity in general refers to the appropriateness of a given test or any of its 
component parts as a measure of what it is purported to measure. A test is said 
to be valid to the extent that it measures what it is supposed to measure. It 
follows that the term valid when used to describe a test should usually be 
accompanied by the preposition for. Any test then may be valid for some 
purposes, but not for others. 

There are different types of validity, such as empirical validity, face validity and 
construct validity. 

Empirical validity is ensured by setting strict statistical parameters for the 
various items in a test. Items should only become part of a test after a reasonable 
amount of piloting which ensures that each item measures what it is supposed to 
measure. It must therefore discriminate well between those whose total scores fall 
into the top quartile and those whose total scores fall into the bottom group. The 
parameter for this discrimination value of an item is normally set between 0.3 and 
1 on an index from 0 to 1. To arrive at the specific values for each test item, test designers should trial and evaluate each item as well as the test as a whole.
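By way of illustration (a sketch only; the actual values for TALL and TALPS come from dedicated item-analysis software), the upper-lower discrimination value described above can be computed by contrasting the top and bottom quartiles of total scorers:

```python
import numpy as np

def discrimination_index(item_scores: np.ndarray, total_scores: np.ndarray) -> float:
    """Upper-lower discrimination: proportion correct on one item in the
    top quartile (by total test score) minus that in the bottom quartile."""
    order = np.argsort(total_scores)
    k = max(1, len(total_scores) // 4)
    bottom, top = order[:k], order[-k:]
    return float(item_scores[top].mean() - item_scores[bottom].mean())

# Toy data: an item answered correctly only by the four strongest testees
totals = np.array([10, 12, 15, 20, 31, 33, 36, 40, 45, 48, 50, 52, 55, 58, 60, 62])
item = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
d = discrimination_index(item, totals)  # 1.0: maximal discrimination
```

An item whose value falls below the 0.3 cut-off mentioned above would be revised or discarded after piloting.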

The face validity of a test refers to the way in which it impresses or fails to 
impress a lay person, but also the way in which the test appears to be valid for the 
users of the test. Tests that do not appear to be valid to users may not be taken 
seriously for their given purpose. If, however, test takers consider a test to have 
face validity, they are more likely to perform to the best of their ability and re- 
spond appropriately to items (Alderson, Clapham & Wall, 1995; Weideman, 2006a). 

A test should also have construct validity, i.e. an analysis should indicate whether the theory, or the analytical definition of the construct, upon which the test design is built is valid. Alderson, Clapham and Wall (1995:183) explain construct validity as follows:

The term construct refers to a psychological construct, a theoretical conceptu- 
alization about an aspect of human behaviour that cannot be measured or 
observed directly... Construct validation is the process of gathering evidence 
to support the contention that a given test indeed measures the psychological 
construct the makers intend it to measure. 

The Test of Academic Literacy for Postgraduate Students (TALPS) that is being 
developed at the University of Pretoria sets out to measure academic literacy in 
terms of a definition (Weideman, 2007a: xi) that assumes that students who are 
academically literate should be able to: 

• understand a range of academic vocabulary in context; 

• interpret and use metaphor and idiom, and perceive connotation, word 
play and ambiguity; 




• understand relations between different parts of a text, be aware of the 
logical development of (an academic) text, via introductions to conclu- 
sions, and know how to use language that serves to make the different 
parts of a text hang together; 

• interpret different kinds of text type (genre), and show sensitivity for the 
meaning that they convey, and the audience that they are aimed at; 

• interpret, use and produce information presented in graphic or visual 
format; 

• make distinctions between essential and non-essential information, fact 
and opinion, propositions and arguments; distinguish between cause and 
effect, classify, categorise and handle data that make comparisons; 

• see sequence and order, do simple numerical estimations and computa- 
tions that are relevant to academic information, that allow comparisons to 
be made, and can be applied for the purposes of an argument; 

• know what counts as evidence for an argument, extrapolate from informa- 
tion by making inferences, and apply the information or its implications 
to other cases than the one at hand; 

• understand the communicative function of various ways of expression in 
academic language (such as defining, providing examples, arguing); and 

• make meaning (e.g. of an academic text) beyond the level of the sentence. 

The critical component of the above turns out to be the ability to use language in 
academic discourse that enables one to compare and classify, categorise and make 
distinctions between essential and non-essential information. 

From an applied linguistics point of view, it should be noted that test designers use the definition of academic literacy given above as a justification for the various task types that are included in the test, and as a rationale for why the test is made up in a certain way. The first draft of TALPS is therefore made up of the following task types: scrambled text; interpreting and understanding visual and graphic information; dictionary definitions; register and text type; understanding texts (longer reading passages); academic vocabulary; text editing; and grammar and text relations. The justification for using these task types is normally further articulated in the form of a set of specifications for each task type. Van Dyk and Weideman (2004b) present a table in which each task type is matched with a component or components of the construct:




Table 1: Specifications and task types: TALL

Each specification (component of the construct) is followed by the task type(s) measuring or potentially measuring that component.

Vocabulary comprehension:
    Vocabulary knowledge test, longer reading passages, text editing

Understanding metaphor & idiom:
    Longer reading passages

Textuality (cohesion and grammar):
    Scrambled text, text editing, (perhaps) register and text type, longer
    reading passages, academic writing tasks

Understanding text type (genre):
    Register and text type, interpreting and understanding visual and graphic
    information, scrambled text, text editing, longer reading passages,
    academic writing tasks

Understanding visual & graphic information:
    Interpreting and understanding visual and graphic information,
    (potentially) longer reading passages

Distinguishing essential/non-essential:
    Longer reading passages, interpreting and understanding visual and graphic
    information, academic writing tasks

Numerical computation:
    Interpreting and understanding visual and graphic information, longer
    reading passages

Extrapolation and application:
    Longer reading passages, academic writing tasks, (potentially) interpreting
    and understanding visual and graphic information

Communicative function:
    Longer reading passages, (possibly also) text editing, scrambled text

Making meaning beyond the sentence:
    Longer reading passages, register and text type, scrambled text,
    interpreting and understanding visual and graphic information


The following table illustrates the selected task types as well as the marks allocat- 
ed to each task type for the first two drafts of TALPS. In comparison with the task 
types selected for TALL, two additional task types were selected, i.e. dictionary 
definitions and grammar and text relations. 




Table 2: Task types and marks allocated to them in two TALPS draft versions

Task type                      Marks (First draft)   Marks (Second draft; pilot)
Scrambled text                 15                    5
Graphic and visual literacy    16                    16
Dictionary definitions         5                     5
Academic vocabulary            40                    27
Text type                      5                     5
Understanding texts            60                    60
Grammar and text relations     22                    22
Text edit                      10                    10
Total                          173                   150


As can be seen from the table above, there is but a slight difference between the first and second drafts of TALPS as far as the marks allocated to the different task types are concerned. The difference results from a reduction in the number of items for two of the task types, and was the first attempt to shorten TALPS.

In principle, a test cannot be valid unless it is reliable (Alderson et al., 1995:187). For a test designer to develop a fair test, different kinds of validity are not the only important factors; the test also requires reliability. Test makers therefore focus on the internal reliability or consistency of a test. We have noted one such measure in the factor analysis (Figure 1 above), which shows the extent to which a test is heterogeneous or homogeneous in what it measures. The reliability of a test is usually expressed in terms of such statistical measures. In the case of the internal reliability of a test, i.e. its consistency across all items in the test, this statistical measure is generally expressed as an index (from 0 to 1) called alpha. Such a reliability index gives an indication of how internally consistent a test is. The alpha for high-stakes tests is expected to be at least 0.6, but preferably higher than 0.7. Although we do not yet know whether TALPS will be employed as a high- or medium-stakes test, we are pleased with the reliability of the pilots of the test as a whole, and of certain components of it, which we have so far tested:




Table 3: TALPS: Reliability measures

Date and component of the test            Alpha
May 2007 (longer draft)                   0.85
May 2007 (Graphic & visual literacy)      0.69
May 2007 (Text type & text edit)          0.69
May 2007 (Scrambled text)                 0.78
Average                                   0.75
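The alpha values reported here come from the item-analysis software used in piloting, but the underlying computation is straightforward; a minimal sketch with illustrative toy data (not TALPS responses):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a score matrix with one row per testee
    and one column per item."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy data: four testees, three dichotomously scored items
toy = np.array([[1, 1, 1],
                [1, 1, 0],
                [0, 1, 0],
                [0, 0, 0]])
alpha = cronbach_alpha(toy)  # 0.75 for this toy matrix
```

The more the items rise and fall together across testees, the smaller the summed item variances are relative to the total-score variance, and the closer alpha moves to 1.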


In addition to such shorter tests (of between 10 and 15 marks, for the latter three mentioned above) being as reliable as one would wish, there is also evidence that the spread of marks achieved, i.e. the way the test manages to discriminate between testees, is highly satisfactory. Compare (Figure 2) the way in which the TALPS visual and graphic literacy component pilot discriminated among the 551 testees to whom it was administered, in the following score distribution table generated by means of an Iteman analysis of the results:


Number correct   Frequency   Cum. frequency   PR    PCT
      0              1              1          1     0
      1              1              2          1     0
      2              6              8          1     1
      3             11             19          3     2
      4             32             51          9     6
      5             48             99         18     9
      6             53            152         28    10
      7             56            208         38    10
      8             58            266         48    11
      9             80            346         63    15
     10             65            411         75    12
     11             48            459         83     9
     12             39            498         90     7
     13             33            531         96     6
     14             18            549         99     3
     15              2            551         99     0

(In the original Iteman output, each row is accompanied by a histogram bar showing the percentage of examinees at that score.)

Figure 2: Score distribution table - TALPS pilot (graphic & visual literacy) 
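The cumulative frequency, percentile rank (PR) and percentage (PCT) columns in Figure 2 follow mechanically from the raw score frequencies. The sketch below reproduces the figure's columns (frequencies read off the figure; Iteman appears to clip the printed PR to the 1-99 range):

```python
import numpy as np

# Score frequencies (scores 0-15) for the graphic & visual literacy pilot, N = 551
freq = np.array([1, 1, 6, 11, 32, 48, 53, 56, 58, 80, 65, 48, 39, 33, 18, 2])

cum = freq.cumsum()                                   # cumulative frequency
pct = np.rint(100 * freq / freq.sum()).astype(int)    # percentage at each score
pr = np.clip(np.rint(100 * cum / freq.sum()).astype(int), 1, 99)  # percentile rank
```

The resulting pr column gives, for instance, 63 at a score of 9 and 75 at a score of 10, matching the figure.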




Given these indications of reliability and the discriminative power of the draft 
TALPS and some of its subcomponents, we can again ask whether a shorter test is 
a viable option. The first answer to this, as has been stated before, is that although 
a test can be valid and reliable, its utility can still be undermined if it is too long. 


5. An alternative for the conventional format 

When one intends to shorten a test of academic literacy levels, the first question that comes to mind is: which components should be kept and which omitted? To establish this, subtest intercorrelations were computed on the results of the 2007 administration of our conventional test (TALL) at the University of Pretoria. The results indicate that, while the correlations between the various subtests and the test total were satisfactorily high (above 0.6 in all but two of the cases, and in those two not lower than 0.44), the inter-subtest correlations were, as desired (since each subtest is hypothetically supposed to tap into a different component of the construct), below 0.5 in all but three of the cases. The complete set of inter-subtest correlations is shown in Table 4.

Table 4 shows that the three subtests correlating most highly with the total are understanding texts, academic vocabulary and text editing. The question now arises:

Table 4: Subtest intercorrelations

Subtest (no.)                   Total    1       2       3       4       5
Scrambled text (1)              0.46
Graphic & visual literacy (2)   0.67     0.25
Text types (3)                  0.44     0.14    0.24
Understanding texts (4)         0.87     0.33    0.55    0.34
Academic vocabulary (5)         0.82     0.29    0.47    0.33    0.69
Text editing (6)                0.84     0.25    0.44    0.26    0.59    0.62

                                Test     1       2       3       4       5       6
Number of testees               3905     3905    3905    3905    3905    3905    3905
Number of items                 65       5       8       5       22      9       16
Average test score              39.87    3.24    5.54    2.93    14.72   5.41    8.03
Standard deviation              13.35    1.86    2.19    1.28    4.57    2.61    5.15
SEM                             3.34     0.73    1.11    0.83    1.93    1.16    1.44
Average P-value                 61.33    64.79   69.25   58.59   66.90   60.11   50.18
Coefficient alpha               0.94     0.85    0.74    0.58    0.82    0.80    0.92
GLB                             0.97     0.93    0.81    0.77    0.85    0.82    0.96
Asymptotic GLB                  0.96     0.93    0.80    0.77    0.84    0.82    0.96




Will these three be able to do the trick on their own? If the scores of only these three are taken, there is a high correlation with the original total. However, this high correlation is to be expected, since these three subtests make up the greater part of the initial test. And this is exactly where the argument for using them as the subtests of a shorter test runs up against a wall.

As an alternative, to see whether this impediment can be overcome, we explored partial correlations. If partial correlations are computed, i.e. analyses in which we hold constant the influence of some parts (in this case subtests 1, 2 and 3), then there is indeed some relationship (some 25%) between subtests 4, 5 and 6, the three subtests that could potentially constitute a shorter test (see Table 5).


Table 5: Subtest partial correlations between subtests 4, 5 and 6

                 Subtest 4         Subtest 5         Subtest 6
Subtest 4        1.00              0.54 (<.0001)     0.42 (<.0001)
Subtest 5        0.54 (<.0001)     1.00              0.48 (<.0001)
Subtest 6        0.42 (<.0001)     0.48 (<.0001)     1.00

Pearson partial correlation coefficients (probability in parentheses); N = 3905
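Partial correlations of this kind can be obtained by regressing each of the two subtests concerned on the controlled subtests (here 1, 2 and 3) and correlating the residuals. The sketch below uses small synthetic vectors, not the actual TALL data for the 3905 testees:

```python
import numpy as np

def partial_corr(x: np.ndarray, y: np.ndarray, controls: np.ndarray) -> float:
    """Pearson correlation between x and y after partialling out the control
    variable(s): correlate the residuals of regressing each on the controls."""
    design = np.column_stack([np.ones(len(x)), controls])
    res_x = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    res_y = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return float(np.corrcoef(res_x, res_y)[0, 1])

# Synthetic check: x and y both depend on the control z but are otherwise
# unrelated, so their raw correlation is positive while the partial
# correlation is (near) zero.
z = np.arange(4.0)
x = z + np.array([1.0, -1.0, -1.0, 1.0])
y = z + np.array([1.0, -3.0, 3.0, -1.0])
pc = partial_corr(x, y, z)
```

In the table above, the controls would be the score vectors for subtests 1-3 stacked as columns, and x and y any pair drawn from subtests 4-6.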


The main problem, however, is that even these analyses are unlikely to give us empirical answers to the question as long as the subtests do not all carry the same weight.


6. Conclusion 

The sensible conclusion is that one should thus either change the design of the test, or acknowledge that one has to make the decision on other than purely empirical grounds. This means, in effect, that even this kind of decision (about what should make up a shorter test) is, in the end, one that must be made judiciously, i.e. after weighing up various issues. Empirical data and analyses may provide some support for responsible decisions, but this may not always be the case.

If one considers the use to which TALPS will be put, namely to determine the 
level of risk that a postgraduate student has in respect of academic literacy, a more 
productive future line of investigation may be to investigate which component(s) 




(subtest(s)) has/have the best predictive validity, i.e. correlates most highly with some other measure of academic success (like finishing a postgraduate degree in time). This kind of validation, against another, external measure, is in any event desirable. This is where I hope the subsequent research should, and will, take us.


Bibliography 

Alderson, J.C., Clapham, C. & Wall, D. 1995. Language test construction and evaluation. Cambridge: Cambridge University Press.

Henning, G. 1987. A guide to language testing: Development, evaluation, research. Cambridge: Newbury House.

Owen, K. 1996. Construction of tests and questionnaires: Basic psychometric principles. In: Owen, K. & Taljaard, J.J. 1996. Handbook for use of psychological and scholastic tests for the HSRC. Pretoria: Human Sciences Research Council.

Van der Slik, F. & Weideman, A. 2005. The refinement of a test of academic literacy. Per linguam 21(1): 23-35.

Van Dyk, T. & Weideman, A. 2004a. Switching constructs: On the selection of an appropriate blueprint for academic literacy assessment. SAALT Journal for language teaching 38(1): 1-13.

Van Dyk, T. & Weideman, A. 2004b. Finding the right measure: From blueprint to specification to item type. SAALT Journal for language teaching 38(1): 15-24.

Weideman, A.J. 2003d. Assessing and developing academic literacy. Per linguam 19(1 & 2): 55-65.

Weideman, A.J. 2006a. Transparency and accountability in applied linguistics. Southern African linguistics and applied language studies 24(1): 71-86.

Weideman, A.J. 2007a. Academic literacy: Prepare to learn. Second edition. Pretoria: Van Schaik.

Weideman, A.J. 2007b. A responsible agenda for applied linguistics: Confessions of a philosopher. Keynote address, joint LSSA/SAALA/SAALT 2007 conference, Potchefstroom.


Moving to more than editing: Standardised feedback in practice

Henk Louw 

North-West University (Potchefstroom Campus) 



This article reports on an experiment which tested how effectively standardised feedback could be used when marking L2 student writing. The experiment was conducted by using a custom-programmed software tool and a set of standardised feedback comments. The results of the experiment show that standardised feedback can, to a degree, be used consistently and effectively, even though some refinements are still needed. Using standardised feedback in a standard marking environment can assist markers in raising their awareness of errors and in more accurately identifying where students lack knowledge. With some refinements, it may also be possible to speed up the marking process.


1 . Introduction and background to the project 

The process of providing feedback (marking) on student essays is usually very 
time-consuming. Comparing the amount of time spent on it by teachers and the 
amount of attention paid to it by students, it may be considered one of the least 
effective duties of language teachers (Moletsane, 2002:21; Hyland, 1990:282). 

In 2004 a project commenced at the North-West University to investigate the possibility of getting more "teaching" out of marking. The main objectives of the study were to:

a. establish whether standardised feedback would ensure more clarity for 
the student (Louw, 2006); 

b. create a system to keep effective records of student development (cf. Wible, 
Kiu, Chien, Liu & Tsao, 2001; Louw, 2006); 

c. establish whether standardised feedback could be used consistently; 

d. establish whether standardised feedback would ensure ease of use for the 
marker; and 

e. force students to pay attention to the feedback. 

Spencer (1998) researched strategies for responding to student writing, while Wible et al. (2001) created an electronic marking system used to keep track of student development. Wible et al.'s (2001) system did not work with standardised feedback, while Spencer found that, within current working limitations, certain marking strategies work better than others. The project at the North-West University aimed to integrate these findings into one project. The first step was to establish whether or not feedback could be standardised to an extent, and whether it would actually benefit the student. In Louw (2006) this was found to be the case. This article reports on objectives (c) and (d) as listed above. In addition, the feasibility of the marking system is also addressed.

For background purposes, a quick overview of the findings with regard to 
feedback is presented in the next section. 


2. What is effective feedback? 

To provide standardised feedback, it was first necessary to establish exactly what constitutes feedback, as well as the nature of effective feedback. The different classifications of feedback are too numerous and intricate to discuss here (see Louw, 2006), but some of the important facts about feedback can briefly be summarised as follows:

• The interpretation and use of the concept "feedback" is closely related to 
the user's definition of and attitude to "error". 

• Feedback is not just error correction, but any response (positive or negative) to a student text by any reader of the text (Hyland, 2003, 1990, 1998; Lyster & Ranta, 1997; Askew & Lodge, 2000). By definition, then, feedback can be provided in many different ways.

• Depending on the purpose or background of the 'feedback giver', feedback can be classified as performing many different functions:

- evidence (linguists), 

- repair (discourse analysis), 

- correction (L2 teachers), and 

- focus on form (SLA researchers) (Lyster & Ranta, 1997:38). 

• Learners expect feedback, but often neglect to look at it (Ferris, 2002:13-14).

• There are many advantages and disadvantages to feedback, but there are 
conflicting research findings regarding these (see Ferris, 2003:127 and Louw, 
2006). 

Because there are so many overlapping and contrasting definitions of feedback 
(see Louw, 2006), the following working definitions will be used in this article: 

• An error is any instance of incorrect language use in a text, or of language use which is not inherently wrong but which could be improved.

• Feedback is any reaction to a text by any reader of the text, for the purpose of pointing out errors to the writer. In keeping with the definition of "error", feedback could also indicate satisfaction with something in the text.




3. Can feedback be standardised? 

In Louw (2006) a standardised set of feedback tags was created. The tag set contains a list of "popular" error tags used by markers (as established by research and in an experiment in Louw, 2006) and in corpus linguistics (Granger & Meunier, 2003). The tag set is constantly being updated and refined. An example of the current version (at the time of writing) is attached as Addendum A. An experiment proved that marking with this set of standardised feedback comments is more effective than the usual squiggles, lines, strike-throughs and question marks often used by lecturers (Louw, 2006). For reference purposes, these squiggles and other feedback marks are called hieroglyphics. See Figure 1 for an example of hieroglyphics.

In the experiment in Louw (2006), a hieroglyphic marking technique was pitted against standardised feedback and a blank text on which the number of errors was indicated, but no errors were marked. The results indicated that students were seldom able to identify errors in the blank texts, much less correctly revise them. On the other hand, students were able to correctly revise errors marked with the hieroglyphic feedback, but the standardised feedback proved to deliver the greatest improvement in all categories tested. This shows that students are often able to revise errors once these are indicated to them, but that in order to facilitate maximal improvement in writing, standardised feedback is more effective.

Based on these findings, the question then arises whether standardised feedback could be implemented in practice with consistency. Another experiment was therefore conducted to investigate the following questions:

a. Can standardised feedback be used consistently by markers?

b. Will standardised feedback make it easier for markers to mark texts effectively?

The rest of this paper reports on this experiment.



Figure 1: Example of hieroglyphic feedback


4. Methodology 

Four markers (two experienced and two inexperienced) were instructed to mark a number of L2 learner essays using the feedback tag set as tool. The essays for the experiment come mostly from the Tswana Learner English Corpus (TLE) (Van Rooy & Schaefer, 2002), with a small number from the Afrikaans Learner English Corpus, which is still under construction.

The tag set was incorporated into a custom-built software package. The software package imported the text to be marked into a marking window and displayed the entire standardised feedback tag list in a drop-down tree view to the right of the text (see Figure 2). On finding an error, markers simply had to highlight the error and click on the relevant error category, and the computer would insert the feedback. Markers were shown how the system worked, but were not given any additional instruction on how to mark or on what to provide feedback. Additional feedback comments not covered by the tag set could be added by making use of a comment box. It should be noted that the drop-down tree view simply indicates the "name" of the specific error tag used by the markers, and is not the (more complete) feedback a student would see (compare Addendum A and Figure 3).
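The core operation of the tool, wrapping a highlighted span with a standardised error tag, can be sketched as follows. This is an illustrative reconstruction, not the actual custom-built package; the tag names and feedback texts below are invented examples rather than entries from the real tag set in Addendum A:

```python
# Hypothetical subset of a standardised tag set: tag name -> feedback the student sees
TAGS = {
    "concord": "Concord: the verb does not agree in number with its subject.",
    "tense": "Tense: check whether the tense of this verb fits the context.",
}

def tag_error(text: str, start: int, end: int, tag: str) -> str:
    """Wrap text[start:end] in an inline standardised error tag, so that a
    student view can later highlight the span and pop up TAGS[tag]."""
    if tag not in TAGS:
        raise KeyError(f"unknown error tag: {tag}")
    return f"{text[:start]}<err type={tag!r}>{text[start:end]}</err>{text[end:]}"

essay = "The results shows an improvement."
tagged = tag_error(essay, 12, 17, "concord")
# tagged == "The results <err type='concord'>shows</err> an improvement."
```

Because such inline tags are machine-readable, error frequencies per student can also be counted across drafts, which is what makes systematic record-keeping of student development (objective b in the introduction) possible.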


[Screenshot: the marking window, showing a student essay with an identified error highlighted; the standardised error tag set in a drop-down tree view on the right, with the relevant error tag for the highlighted error selected; and a comment box for feedback not covered by the standardised tag set.]

Figure 2: Essay marker screenshot





[Screenshot: student errors are highlighted in colour; when a student moves his/her mouse over a highlighted area, a pop-up block explains the error.]

Figure 3: Student view illustrating how students will receive feedback


Figure 2 illustrates the view of the marking system as the marker would see it. In this figure, the marker has identified a concord error and highlighted the error. The marker has also identified the relevant error tag (in this instance "concord") in the tree view on the right-hand side of the screen.

Figure 3 illustrates how the student would receive this specific feedback. The student will get his/her text back as an HTML file which can be opened by any standard web browser like Internet Explorer or Mozilla Firefox. Errors are highlighted in different colours; for example, red indicates a grammar error. In this case, the student has moved his/her cursor over the highlighted word and got the pop-up message, "The form of your verb should agree with the subject it refers to".
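As a rough sketch of this rendering step (assumed markup only; the real system's HTML may well differ), an error tag can be converted into a highlighted span whose title attribute supplies the pop-up text. The concord message used here is the one quoted above, and the FEEDBACK mapping stands in for the fuller student-facing messages of Addendum A:

```python
# Sketch of the student view: convert an error tag into an HTML span
# whose "title" attribute pops up the feedback message on mouse-over,
# roughly as Figure 3 shows. Illustrative only; the real system's
# markup and colour coding are not specified in this detail.
import re

FEEDBACK = {
    "Concord": "The form of your verb should agree with the subject it refers to",
}

def to_html(tagged: str) -> str:
    def repl(m):
        tag, span = m.group(1), m.group(2)
        msg = FEEDBACK.get(tag, tag)  # fall back to the tag name itself
        return f'<span style="background:red" title="{msg}">{span}</span>'
    # Match <Tag>...</Tag> pairs; \1 backreference keeps open/close aligned.
    return re.sub(r"<([^</>]+)>(.*?)</\1>", repl, tagged)

print(to_html("The people <Concord>is</Concord> poor."))
```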

The marked essays were stored electronically with their error tags for analysis. The error tags the markers used were extracted from the database. The markers did not all mark an equal number of essays. To counter this problem, and in order to compare apples with apples, the data was normalised: the number of times a tag was used was reworked to the number of times it was used per 1000 errors tagged. All numbers reported in this paper therefore refer to the normalised totals.
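The normalisation step amounts to rescaling each marker's raw tag counts to a common base of 1000 tagged errors. A minimal sketch, with invented counts:

```python
# Normalise a marker's tag counts to "uses per 1000 errors tagged",
# so markers who marked different numbers of essays can be compared.
# The raw counts below are invented for illustration.

def normalise(tag_counts: dict) -> dict:
    total = sum(tag_counts.values())
    return {tag: round(n * 1000 / total) for tag, n in tag_counts.items()}

raw = {"Wrong word": 46, "Concord": 10, "Tense": 21, "Spelling/typing error": 52}
normed = normalise(raw)
# e.g. "Wrong word": 46 of 129 tags -> round(46 * 1000 / 129) = 357 per 1000
```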

In addition to analysing the tags used, the markers were also sent a question- 
naire with 18 questions asking them about their experiences with the marking 
system. These questions are included in Addendum B. 




The research aimed to answer two questions: 

1. Can standardised feedback be used consistently? 

2. Is the system easy to use for the markers and, if not, how can it be improved?

To answer question one, two types of analysis were done on the marked essays. 

• An analysis of all the tags was done to establish marker tendencies. 

• A close analysis of the way the markers used the tag "wrong word" was conducted. (The tag was chosen because all four of the markers used it as one of their top ten favourite tags.)

The answers to the questionnaire were also used to judge the consistency with 
which the markers marked. To answer question two regarding the ease of use of 
the marker system, the answers provided on the questionnaire were used. 


5. Results: can standardised feedback be used consistently? 

Marker tendencies 

Table 1 indicates the top 20 tags used by the markers. The first three columns identify and explain the error tag (see Addendum A for additional clarification), while 'Knorm', 'Mnorm', 'Pnorm' and 'Tnorm' reveal the number of times the specific tag was used by the different markers. The column 'Normed total' indicates the total number of times a tag had been used out of a total of 4000 marked errors (see Table 1).

With regard to the top 20 tags shown in Table 1, the following points are 
interesting to note: 

1. Four of the top 20 are errors which are only present in writing: Punctuation wrong, Punctuation missing, Capitalization, Spelling/typing error. These are surface-element errors only.

2. Seven errors have to do with lexis: Wrong word, Better word, Word form wrong, Article missing, Preposition wrong, Word choice obscuring meaning, Determiner incorrect.

3. Three error tags have to do with morphology: Concord, Tense, Omission plural marker.

4. Three errors have to do with syntax: Superfluous general, Omission general, Omission verb.

5. Only two error tags deal with coherence or cohesion, and both of these are coherence on a small scale - within paragraphs or within sentences. The relevant tags are: Reasoning inconclusive and Reference vague.




Table 1: Top 20 tags used

Rank | Superordinate | Domain | Set (error tag) | Knorm | Mnorm | Pnorm | Tnorm | Normed total
1 | Presentation | Spelling | Spelling/typing error | 105 | 92 | 150 | 62 | 409
2 | Grammar | Lexis | Wrong word | 92 | 40 | 87 | 47 | 266
3 | Presentation | Capitalization | Capitalization | 18 | 130 | 33 | 25 | 206
4 | Grammar | Syntax | Superfluous general | 53 | 35 | 26 | 59 | 173
5 | Grammar | Morphology | Concord | 19 | 46 | 79 | 25 | 169
6 | Presentation | Punctuation | Punctuation missing | 23 | 44 | 47 | 51 | 166
7 | Grammar | Lexis | Better word | 8 | 16 | 81 | 34 | 139
8 | Grammar | Syntax | Omission general | 40 | 26 | 17 | 32 | 115
9 | Grammar | Morphology | Tense | 42 | 29 | 5 | 38 | 114
10 | Discourse | Coherence | Reasoning inconclusive | 73 | 2 | 8 | 30 | 113
11 | Grammar | Lexis | Word form wrong | 21 | 36 | 40 | 16 | 112
12 | Grammar | Morphology | Omission plural marker | 14 | 61 | 6 | 29 | 109
13 | Grammar | Lexis | Article missing | 2 | 62 | 7 | 19 | 90
14 | Grammar | Lexis | Preposition wrong | 15 | 16 | 35 | 23 | 89
15 | Discourse | Style | Sentence vague | 53 | 2 | 23 | 10 | 88
16 | Grammar | Lexis | Determiner incorrect | 18 | 39 | 2 | 26 | 85
17 | Discourse | Factual correctness | Facts wrong | 14 | 2 | 8 | 44 | 69
18 | Presentation | Punctuation | Punctuation wrong | 25 | 5 | 28 | 10 | 68
19 | Discourse | Cohesion | Reference vague | 9 | 18 | 0 | 33 | 61
20 | Grammar | Lexis | Word choice obscuring meaning | 15 | 28 | 9 | 7 | 59
20 | Grammar | Syntax | Omission verb | 4 | 31 | 10 | 14 | 59




The results indicate that the markers did not focus only on surface elements. Style, coherence and the accuracy of facts also feature, but only on a small scale - within the paragraph or sentence. These results are the same for all four markers. Although it is not ideal to focus on surface-structure errors, these results are no different from those found in previous studies (see Louw, 2006:103).

One can deduce that: 

a. Markers were relatively consistent in focusing more on surface-level errors, even though they did not actively work together and even though they focused on different surface-level errors.

b. Errors other than grammar, spelling and punctuation are markedly more difficult to identify.

c. Surface-structure errors bother markers - to such an extent that markers may even ignore other errors. Admittedly, there are so many errors in some of the sample essays that it becomes extremely difficult to mark for argument. See example texts one and two in Addendum C.

d. Writing seems to be an effective way to notice a poor lexicon.

Marker personal favourites 

For purposes of comparison, the inter-marker consistency (top ten used tags) is included in Addendum D. Table 2 presents the tags that occurred in the markers' personal top tens. A count of four therefore indicates that all four markers used the tag as one of their most frequently used tags, while a count of three indicates that three of the four markers did so.


Table 2: Tags that occurred in markers' personal favourites

Error tag | Occurrence in top ten of marker favourites
Punctuation missing | 4
Spelling/typing error | 4
Wrong word | 4
Superfluous general | 3
Word form wrong | 2
Better word | 2
Capitalization | 2
Concord | 2
Omission general | 2
Punctuation wrong | 2
Tense | 2




There is very low inter-marker consistency evident here. Although it can be argued that all markers should have marked the exact same essays, the focus of the experiment was not just on consistency, but also on ease of use for the marker. A broader spectrum of essays to mark generated a broader spectrum of possible uses of the tag set and hence a more thorough test of the marking system. In addition, these essays were all written by students of more or less the same competency, so the comparison (although not perfect) can still be seen as legitimate.

There are only three tags that occur in all four markers' top ten. These are "punctuation missing", "spelling/typing error" and "wrong word". The only tag to occur in three of the four top ten lists is "superfluous general", which is used to indicate superfluous words.
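Determining which tags occur in every marker's top ten amounts to a simple set intersection over the markers' lists. A sketch with invented, abbreviated top-ten lists (the real lists appear in Addendum D):

```python
# Find tags shared by all markers' top-ten lists (inter-marker overlap).
# The lists below are invented, shortened stand-ins for the top tens of
# markers K, M, P and T; the real data is in Table 1 / Addendum D.
from collections import Counter

tops = {
    "K": ["Spelling/typing error", "Wrong word", "Punctuation missing"],
    "M": ["Capitalization", "Wrong word", "Punctuation missing",
          "Spelling/typing error"],
    "P": ["Spelling/typing error", "Better word", "Wrong word",
          "Punctuation missing"],
    "T": ["Superfluous general", "Wrong word", "Punctuation missing",
          "Spelling/typing error"],
}

# Count in how many markers' lists each tag appears (set() removes repeats).
occurrences = Counter(tag for top in tops.values() for tag in set(top))
shared_by_all = sorted(t for t, n in occurrences.items() if n == len(tops))
print(shared_by_all)
# -> ['Punctuation missing', 'Spelling/typing error', 'Wrong word']
```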

If markers constantly have to focus on incorrect word choice, punctuation, overuse of words and incorrect spelling, it indicates two possibly interacting issues:

1. Students have a very poor ability to make themselves understood, which forces markers to indicate these errors in an attempt to point out that they were unable to understand the text.

2. Markers are overly finicky with regard to surface-level errors, or simply find it easier to comment on those.

Least used tags

Table 3 presents the tags least used by the markers. Although these tags were available, markers seldom used them.

Regarding the least used tags, the following issues appear to be significant:

a. Five of the six tags in the domain "structure" occur in the least used tags. Markers therefore seldom touched upon the issue of paragraph structure. The domain "structure" falls under errors of discourse and refers mainly to errors concerned with paragraphing.

b. Positive comments are also amongst the least used tags, with only two of the markers ever using positive comments, and then extremely sparingly.

c. The rest of the least used tags are issues of grammar that are either low frequency ("superfluous tense marker") or more difficult to identify ("aspect error").

The results indicate that the markers still sometimes have editing or spell checking in mind when marking student texts. However, one should keep in mind the virtual incomprehensibility of some of the student texts (see Addendum C). With some of these texts it would be very difficult to comment on structure, since it is difficult to understand the text in the first place. Most of the texts are not that difficult to comprehend, though, so in spite of some difficulties the question remains: How can markers be assisted to be more than spell checkers who simply look for instances of surface-level incorrectness?




Table 3: Least used tags

Superordinate | Domain | Set (error tag) | Knorm | Mnorm | Pnorm | Tnorm | Normed total
Discourse | Style | Active voice | 0 | 0 | 0 | 1 | 1
Grammar | Syntax | Superfluous tense marker | 1 | 0 | 0 | 0 | 1
Grammar | Syntax | Omission tense marker | 0 | 0 | 0 | 1 | 1
Grammar | Syntax | Preposition unnecessary | 0 | 0 | 0 | 2 | 2
Discourse | Style | Passive voice | 0 | 0 | 0 | 2 | 2
Discourse | Positive comments | Good reasoning | 2 | 0 | 0 | 0 | 2
Grammar | Morphology | Wrong form - adjective | 2 | 0 | 0 | 0 | 2
Grammar | Syntax | Negation incorrect | 1 | 0 | 1 | 0 | 2
Discourse | Structure | Paragraph jumbled | 1 | 1 | 0 | 0 | 3
Discourse | Style | Construction overuse | 0 | 1 | 0 | 2 | 3
Grammar | Lexis | Quantifier error | 1 | 0 | 2 | 0 | 3
Discourse | Structure | Paragraphing: relate or move | 0 | 0 | 3 | 2 | 5
Discourse | Style | Gender bias | 0 | 0 | 0 | 5 | 5
Grammar | Lexis | Wrong time / temporal adverbial | 0 | 3 | 2 | 0 | 5
Discourse | Factual correctness | Unsupported argument | 0 | 0 | 0 | 6 | 6
Grammar | Lexis | False friend | 3 | 0 | 2 | 1 | 6
Grammar | Lexis | Problem with conditional | 2 | 0 | 3 | 1 | 7
Presentation | Layout | Layout inhibits reading | 0 | 0 | 7 | 0 | 7
Discourse | Structure | Introduction weak | 0 | 0 | 4 | 3 | 8
Discourse | Structure | Paragraph: start new | 3 | 0 | 3 | 2 | 8
Grammar | Morphology | Wrong form - past participle | 2 | 2 | 2 | 3 | 9
Discourse | Style | Register too formal | 8 | 0 | 0 | 1 | 9
Discourse | Positive comments | Interesting point | 8 | 0 | 2 | 0 | 10
Grammar | Morphology | Aspect error | 5 | 0 | 5 | 0 | 10
Discourse | Structure | Paragraph: weak opening sentence | 3 | 4 | 0 | 3 | 10
Grammar | Syntax | Preposition unnecessary | 2 | 4 | 0 | 5 | 11
Grammar | Syntax | Sentence incomplete | 0 | 1 | 10 | 1 | 12
Grammar | Lexis | Pronoun wrong | 8 | 0 | 1 | 6 | 15
Grammar | Lexis | Inappropriate word | 13 | 0 | 0 | 2 | 15
Discourse | Style | Verbosity | 1 | 2 | 2 | 13 | 18
Discourse | Factual correctness | Opinion | 3 | 2 | 3 | 11 | 20
Presentation | Punctuation | Apostrophe error | 1 | 12 | 0 | 7 | 20
Discourse | Factual correctness | Reference omitted / wrong | 0 | 0 | 18 | 1 | 20
Grammar | Syntax | Unnecessary pronoun | 7 | 7 | 0 | 7 | 21
Discourse | Factual correctness | Unbalanced statement | 7 | 0 | 1 | 14 | 22
Grammar | Lexis | Wrong modal | 4 | 2 | 2 | 15 | 23
Grammar | Morphology | Wrong form - present participle | 8 | 9 | 5 | 3 | 25
Grammar | Syntax | Omission punctuation | 0 | 15 | 3 | 9 | 27
Discourse | Coherence | Inconsistency | 12 | 0 | 9 | 6 | 27


Close analysis of the use of one tag

The tag "wrong word" was selected for a closer analysis, since all four markers had it in their list of top ten used tags. The results indicate that intra-marker consistency was relatively good, while inter-marker consistency needs some work. Markers will need some training in order to be consistent with one another, or will need to work together more closely. The close analysis of this specific feedback tag highlighted the following problems:

(i) Favourite generics: Markers tended to use the tag as a generic term instead of using more specific available tags.

(ii) Doubles: In some instances more than one tag may apply, but markers reverted to the one they had used previously.

(iii) Personal preference resulted in tags of "wrong word" where "better word" would have been a better option.

(iv) Incorrectly tagged.

(v) Errors that were difficult to classify.

Each of these will be discussed briefly: 

(i) Markers tended to use the tag as generic term instead of using more 
specific available tags. 

Example 1: They must go to the streets to beg for <Wrong word>money to eat</Wrong word>.




In example 1, the tag "Sentence Ambiguity" could have worked better. "They" do go to the streets to beg for money to buy food to eat. The problem therefore lies much more with the sentence construction than with the word choice.

Example 2: ... an <Wrong word>infinitive</Wrong word> tapestry of green maize and yellow sunflower

In example 2, the word "infinitive" should be "infinite". A better tag would therefore have been "word form wrong": it is a morphological error rather than a lexical or semantic error. On the other hand, one can also make an argument for the tag "better word", in order to suggest "seemingly endless" or "never-ending" if the idea of an "infinite" farm proves problematic. Part of the problem therefore lies in the interpretation of the error.

The problem of using a favourite tag as a generic tag can be overcome with some collaboration between the markers and more specialised training of the markers. Presumably, as markers get more used to the system and get to know the tags better, they will be more aware of additional tags they can use instead of their favourite generic tag. The process of providing feedback continuously on multiple drafts written by students could also assist in this.

(ii) Doubles: More than one possible tag. 

Example 3: ... what they do to keep <Wrong word>this</Wrong word> <Word form wrong>tradition/s</Word form wrong> from ...

In example 3, the difficulty is that there are multiple possible ways to correct this sentence. Should it be "these traditions" or "this tradition"? The context will normally dictate the answer. In this specific case, however, one can ask if the problem is a "wrong word", a "wrong form" or an "omission plural marker".

Another example of multiple possible interpretations of an error is the word 
"irregardless" as used in one of the essays. The dictionary classifies it as informal 
so it is a question of style as well as an issue of "wrong word". The tag "better 
word" could work as well. 

A similar occurrence of double errors is with run-on sentences. Do you classify it as "omission punctuation" or "run-on sentence"? The initial idea was to simply tag double errors with both applicable tags, but due to a technical limitation in the marking program, that was not possible. With the system as it is now, the question facing the marker is which tag to use. It seems markers normally opted for their "generic favourite".

(iii) Personal preferences 

Example 4: In this essay <Wrong word>one</Wrong word> is going to try to <Wrong word>prove</Wrong word> that the prison system is outdated




In example 4, the word "prove" could also be tagged as "better word" if the marker was of the opinion that "argue", "show" or "demonstrate" would have been a better choice. This is, however, a harsh judgement by the marker, clearly indicative of a personal preference.

(iv) Incorrectly tagged 

Example 5: ... how the Romans whipped and <Wrong word>cruxivied</Wrong word> them ...

The word "cruxivied" in example 5 is definitely not a wrong word. It is the correct word spelled incorrectly, so the tag should have been "spelling/typing error". The student could now be under the impression that "crucified" is not the correct word to use and will struggle in vain to find the "correct" word.

Example 6: Every bank has different options regarding a savings account to <Wrong word>consider.</Wrong word>

In example 6, the word "consider" is not such a big problem as some omitted 
words: "Every bank has different options regarding a savings account one has to 
consider," is a possible correction. 

Example 7: We all know this catchy tune and I am sure a lot of us actually gave it some <Wrong word>tough</Wrong word> one time or the other.

This is a spelling error in example 7. The student meant "thought". Because of the incorrect tag, the student still does not know that he/she simply misspelled "thought", but is instead under the impression that "tough" is the wrong word to use.

(v) Errors that were difficult to classify 

Some errors were difficult to classify because it was not immediately evident what the learner wanted to say. It is only possible to tag an error once you can establish what the intended meaning was. This is especially the case when learners write very long "sentences" without any verbs or punctuation. Example 8 illustrates this problem.

Example 8: Compulsory modules some of them are good but some of 
the they are full of nonsense because we gain nothing from them while 
others are very good because they prepare us for the working 
enviroment and also to become good professionals. Modules like 
sociology especially if you want to do community work is great and 
also koms because we learn how to communicate or improve our 
communication skills since were are preparing to be professionals. 
Modules like entr and wtll I don't really know why we should do them 
cause according to me is total waste of time and money. Any way we 
just have to do them because we don't have choice at all. 




Consistency: Marker comments

The questions put to the markers brought the following to light:

• Only one marker was able to correctly identify the tag he/she used most. The others indicated error tags that were not even in their top ten. This indicates that the markers were often not consciously aware of what they focused on, even though they were under the impression that they paid attention to more than "editing errors". However, one should keep in mind that three of the four markers indicated that they often had difficulty understanding what exactly the students intended to say. This makes it almost understandable that they focused on the surface-level errors instead.

• All the markers indicated that they found the students' ability to present an effective argument underdeveloped. One marker explained, "Sometimes students showed great insight and had impressive ideas, but they were unable to incorporate them into the argument. Usually any insight was lost in a sea of words."

• On the other hand, one marker indicated that he/she consciously decided to ignore spelling or typing errors, since he/she found the other errors of more importance. From these comments, it seems that the marking system should also have been tested on texts other than the Tswana Learner Corpus. One may speculate that if the markers are able to understand the text better, they will mark it with more care and more comprehensively. This will hopefully also result in greater consistency.

• As far as consistency is concerned, all the markers turned out to be more than just spell checkers, but there were differences in what they focused on. This problem can probably be surmounted if the system is used as part of a well-structured writing lesson where all the markers know what the aim of the exercise is, and therefore focus on the same issues in unison.

Ease of use for the marker

An integral part of the project to provide better feedback was to try and make it easier and faster for the marker to provide more (and more thorough) feedback. Unfortunately, the software used for this experiment was a prototype version and contained many bugs and system limitations which hampered the process.

Despite the bugs, markers indicated that they could gain speed with the system, especially once they got to know the tag set. Unfortunately, it still took between 10 and 30 minutes per 500-word essay. Before any rash judgements on the time effectiveness of the system can be made, it has to be compared to normal manual marking. Even if it turns out that it takes just as long to mark an essay with the marking system as with traditional manual marking, the system has more advantages than traditional marking.




It seems that the main reasons for the slow marking are the following:

1. Bugs and limitations in the system.

2. It takes a while to classify an error; a simplified (less elaborate) tag set may streamline the marking. A balance needs to be found between the explicitness of the feedback and the value students get from it.

3. The markers were not used to reading text on a computer screen, which slowed their reading speed.

Solving these problems will speed up the marking. In addition, the following plug-ins are being considered for the system, which could greatly assist markers in speeding up their marking process:

1. A custom spelling and grammar checker that can identify and mark surface errors before the teacher even gets the text.

2. A "focus" function which only allows the teacher to use specific tags, enabling him/her to focus on certain aspects at a time.

3. A teacher prompt function reminding the teacher to use a greater variety of tags.

4. A voice prompt enabling markers to use their voice to insert an error tag instead of clicking with their mouse.

Unfortunately, these possible solutions are time-consuming and expensive to develop, test and implement.


6. Conclusion 

This paper commenced with the questions of whether standardised feedback could be used consistently and whether it would make the life of the marker easier. The answer is that although the system still needs a great deal of refinement, the initial findings are largely positive.

The first testing of the system indicated that it has advantages:

1. It can assist markers in raising their awareness of errors.

2. A regular analysis of the error tags used could assist the teacher in identifying where students lack knowledge.

3. A regular analysis of the error tags used could also assist teachers in identifying where they are overly sensitive to a specific error, or fail to pay attention to important errors.

In addition, the experiment emphasised the following problems:

1. It is difficult to provide effective feedback, since it entails tiring thought processes and error analysis.




2. Detailed feedback on surface-level errors is possible in a standardised way, but markers will need some assistance to consciously move away from merely editing students' work.

3. Some errors can be classified in more than one way. This makes it difficult 
for markers to be consistent in how they mark. The problem may be solved 
with adaptations to the error tag set. 

4. At present, feedback remains a time-consuming activity. 

5. One standardised set of feedback tags does not seem to be useful for differ- 
ent levels of students, since the weaker students make so many surface 
level errors that the text is difficult to mark. 

Addressing these problems will still require lengthy research and plenty of computer programming, but at least a start has been made and the data shows it to be a start in the right direction.


Author's note 

The financial support of the National Research Foundation (grant number FA2004043000051) is gratefully acknowledged. All opinions expressed in this article are my own, and should not be attributed to the National Research Foundation. The author also wishes to thank Professor Bertus van Rooy for his assistance, as well as three anonymous reviewers for their comments.


Bibliography 

Askew, S. & Lodge, C. 2000. Gifts, ping-pong and loops - linking feedback and learning. In: Askew, S. (ed). Feedback for learning. London: RoutledgeFalmer. p. 1-17.

Ferris, D. 2003. Responding to writing. In: Kroll, B. (ed). Exploring the dynamics of second language writing. Cambridge University Press. p. 93-114.

Granger, S. & Meunier, F. 2003. Error tagging project - revised guidelines. (Unpublished document).

Hyland, F. 2003. Focusing on form: student engagement with teacher feedback. System, 31:217-230.

Hyland, F. 1998. The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7(3):255-286.

Hyland, K. 1990. Providing productive feedback. ELT Journal, 44(4):279-285.

Louw, H. 2006. Standardising written feedback on L2 student writing. Potchefstroom: North-West University. (Unpublished dissertation - M.A.).

Lyster, R. & Ranta, L. 1997. Corrective feedback and learner uptake. SSLA, 20:37-66.

Moletsane, J.R. 2002. Selective error correction in ESL narrative compositions. Potchefstroom: PU for CHE. (Unpublished mini-dissertation - M.A.).

Spencer, B. 1998. Responding to student writing: Strategies for a distance-teaching context. Pretoria: University of South Africa. (Unpublished thesis - D.Litt.).

Van Rooy, B. & Schaëfer, L. 2002. The effect of learner errors on POS tag errors during automatic POS tagging. Southern African Linguistics and Applied Language Studies, 20:325-335.

Wible, D., Kuo, C., Chien, F., Liu, A. & Tsao, N. 2001. A Web-based EFL writing environment: Integrating information for learners, teachers, and researchers. Computers & Education, 37:297-315.




Addendum A: Extract from tag set 

Please note that the full tag set could not be included due to space constraints. 


Superordinate / Domain / Tag, followed by the student-facing feedback, usage notes and XML label for each entry:

Grammar / Lexis / Repetition (XML label: GLRE)
Feedback: "You use the same words repeatedly. Find different words that may convey your message more clearly."
Use: Use this tag when you realise that a student keeps on using the same word, e.g. if a student uses the word "good" to mean "excellent" and "strong" and "hard" and "pretty", etc. This will be context sensitive. If you have to use the "Word: better word" tag a lot for the same word, rather start using the "Word: repetition" tag.

Grammar / Lexis / Word choice obscuring meaning (XML label: GLWC)
Feedback: "This word is not clear enough. Find a better word to say what you want to say."
Use: Use this when another word would make the intended meaning much clearer, e.g. "Only third year students were able / allowed to go." All were able to go, but not all were allowed to go.

Grammar / Lexis / Word form wrong (XML label: GLWF)
Feedback: "This word should have been in a different form for this context."
Use: Use this for words in the wrong form not covered by the other labels below.

Discourse / Style / Active voice (XML label: DSAV)
Feedback: "The passive voice might be more appropriate here."
Use: Use this where the student used the active voice, but in your opinion the passive voice would have worked better.

Grammar / Morphology / Wrong form - present participle (XML label: GMWO)
Feedback: "The wrong form of the word. Use the '-ing' form of the word."
Example: "I am busy work in the garden."

Grammar / Morphology / Wrong form - adjective (XML label: GMWF)
Feedback: "Wrong word form. Use the correct form of the word."
Use: Use this when the learner should have used the adjective form of the word, e.g. "He gave me rot apples."

Grammar / Lexis / Wrong time / temporal adverbial (XML label: GLWT)
Feedback: "This time-word does not fit the rest of the essay."
Use: Use this when a student uses e.g. a word in the past tense when the whole essay is written in the present tense.

Grammar / Lexis / Wrong word (XML label: GLWW)
Feedback: "This is the wrong word. Find and use the correct word for the context."
Use: Use this when a student should have used another word instead, e.g. "Students should be learned (taught) to ..." or "injury (damage) to property." Property cannot hurt.




Addendum B: Questions to markers who used the marking system 

Please answer the following questions regarding your experience using the mark- 

ing system last year. You may answer in the document and just email it back. 

Don't be shy to make positive or negative comments. 

In the questions, I distinguish between: 

A) System: the computer program. 

B) Error tags: the error categories (the buttons you used and the list of catego- 
ries you had.) 

1 What is your definition of "error "? 

2 Which error tag do you think you used the most? 

3 Which errors were the most difficult to identify? 

4 Were you pressed for time when marking these essays? 

5 How much longer do you think it took you to mark an essay with the system 
than without it? 

6 Can you read a text and ignore (or fail to notice) spelling errors or grammat- 
ical errors? 

7 What was your overall impression of the quality of the students' writing? 

8 What is your overall impression of the students' ability to present an effec- 
tive and clear argument in their texts?

9 Do you read for spelling and grammar errors separately from reading for the 
argument in a text or do you pay attention to both at the same time? 

10 Were there any error tags in the system you did not understand or did not 
know how to use? 

11 Do you think the tags available in the system raised your awareness of possi- 
ble errors? If so, please give an example. 

12 What were the most common errors students made in their writing? 

13 How often would you say it was difficult to decide which tag to use? Give an
example if you can. 

14 How often did you use the system? Did you use it regularly or now and then
for a big batch?

15 Did it get easier to use the system after using it for a while? 

16 What are your recommendations to improve the error tag set? 

17 What are your recommendations to improve the marking system?

18 Any other comments you wish to make? 




Addendum C: Examples of student writing

1. Compulsory modules some of them are good but some of the they are full of 
nonsense because we gain nothing from them while others are very good 
because they prepare us for the working enviroment and also to become good 
professionals. Modules like sociology especially if you want to do community
work is great and also koms because we learn how to communicate or im- 
prove our communication skills since were are preparing to be professionals. 
Modules like entr and wtll I don't really know why we should do them cause 
according to me is total waste of time and money. Any way we just have to do 
them because we don't have choice at all. 

2. Poverty is short fall of consumption or income if somebody can not meet the 
basic needs he or she is regarded as a poor-man. It has Found that African 
countries are under developed so is where the poverty is highly located.As 
poverty is highly concentrated in rural area, or town outskirt and women and 
children and tenagers. Aids goes hand in hand with poverty because those 
women not working had a lot of children and they are straving. So women as 
parents has to find food a clothing and school for their children therefore the 
only altarnative is to practice prostitution or forced to be married by those 
who can help them. In this way teenagers would go out with elders especica- 
ly businessmen to facilitates funds. 

3. Our South African players are not paid well they are being underpaid they 
dont get the salaries that befit their job our players are putting the country in 
to high places they are proving it to the world that they can compete with 
other strong countries Our officials must start thinking properly the high 
salaries that are being paid to the officials who are doing nothing just sitting 
the whole day in their offices and attending alot of meetings making business 
contacts for them selfs the real heroes are being underpayed the reason for our 
players to leave the country to go and play in foreign countries is that they are
paid well they get the money that they are playing for those the reason when 
our players are in foreign countries they play the sport with pride in them 
they get proper treatment our players must be paid correctly because the sport 
that they are playing is their career they have families to feed they are not only 
in the sport for the sake of money but because of the love for the sport they are 
proffessionals they master the sport that they are playing it is of no use repre- 
senting your country but you are not paid according to the job that you are 
doing if you do a job correctly you expect to be rewarded accordingly so our 
players are our pride and they are putting the country into greater heights so 
if they can start getting decent salaries than there wont be a need for them 




leaving the country; The reason why they are leaving the country is because 
they are offered better opportunities they make alot of money in a short space 
of time and they get alot of expoture when they are playing in foreign coun- 
tries; If they can be paid as the foreign countries are paying them than there 
wont be a need for them to leave the country given the same opportunities as 
those given to them by the foreign countries; So our officials must start think- 
ing properly and try to improve the way our players are treated by giving 
them the correct salaries or else our country will endup with no players all the 
player will leave for European countries the europeans will take all the good 
players and at the end the country will be left with nothing it will be unable 
to compete with their counterpaths because all the good player will be play- 
ing for European countries so let us start taking this thing into consoda ration 
and pay our player more so that they can stay at home and make us proud of 
them. After all "Home brewed is best". 




Addendum D

Rank | Domain | Subordinate | Tag | KK normed number of occurrences
1  | Presentation | Spelling    | Spelling/typing error   | 105
2  | Grammar      | Lexis       | Wrong word              | 92
3  | Discourse    | Coherence   | Reasoning inconclusive  | 73
4  | Discourse    | Style       | Sentence vague          | 53
4  | Grammar      | Syntax      | Superfluous general     | 53
6  | Discourse    | Cohesion    | Sentence cohesion       | 46
7  | Grammar      | Morphology  | Tense                   | 42
8  | Grammar      | Syntax      | Omission general        | 40
9  | Presentation | Punctuation | Punctuation wrong       | 25
10 | Discourse    | Coherence   | Relevance to topic      | 23
10 | Presentation | Punctuation | Punctuation missing     | 23

KK PERSONAL FAVOURITES




Rank | Domain | Subordinate | Tag | MB normed number of occurrences
1  | Presentation | Capitalization | Capitalization          | 130
2  | Presentation | Spelling       | Spelling/typing error   | 92
3  | Grammar      | Lexis          | Article missing         | 62
4  | Grammar      | Morphology     | Omission plural marker  | 61
5  | Grammar      | Morphology     | Concord                 | 46
6  | Presentation | Punctuation    | Punctuation missing     | 44
7  | Grammar      | Lexis          | Wrong word              | 40
8  | Grammar      | Lexis          | Determiner incorrect    | 39
9  | Grammar      | Lexis          | Word form wrong         | 36
10 | Grammar      | Syntax         | Superfluous general     | 35

MB PERSONAL FAVOURITES




Rank | Domain | Subordinate | Tag | P normed number of occurrences
1  | Presentation | Spelling       | Spelling/typing error   | 150
2  | Grammar      | Lexis          | Wrong word              | 87
3  | Grammar      | Lexis          | Better word             | 81
4  | Grammar      | Morphology     | Concord                 | 79
5  | Presentation | Layout         | Layout error            | 57
6  | Presentation | Punctuation    | Punctuation missing     | 47
7  | Grammar      | Lexis          | Word form wrong         | 40
8  | Grammar      | Lexis          | Preposition wrong       | 35
9  | Presentation | Capitalization | Capitalization          | 33
10 | Presentation | Punctuation    | Punctuation wrong       | 28

P PERSONAL FAVOURITES





Rank | Domain | Subordinate | Tag | T normed number of occurrences
1  | Presentation | Spelling            | Spelling/typing error   | 62
2  | Grammar      | Syntax              | Superfluous general     | 59
3  | Presentation | Punctuation         | Punctuation missing     | 51
4  | Grammar      | Lexis               | Wrong word              | 47
5  | Discourse    | Factual correctness | Facts wrong             | 44
6  | Discourse    | Style               | Register too informal   | 42
7  | Grammar      | Morphology          | Tense                   | 38
8  | Grammar      | Lexis               | Better word             | 34
9  | Discourse    | Cohesion            | Reference vague         | 33
10 | Grammar      | Syntax              | Omission general        | 32

T PERSONAL FAVOURITES
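The "normed number of occurrences" in the tables above implies that raw tag counts were scaled to a common basis so that markers who marked different volumes of text can be compared. The sketch below illustrates one such normalisation; the per-10 000-word basis, the function name and the figures are assumptions for illustration, not values from the study.

```python
def normed_count(raw_count, words_marked, per=10_000):
    # Scale a raw error-tag count to a common text-length basis so that
    # markers who marked different amounts of text are comparable.
    return round(raw_count * per / words_marked)

# Hypothetical: a marker applied the spelling tag 84 times over 8 000 words:
print(normed_count(84, 8000))  # 105
```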




Looking into the seeds of time: Developing academic literacy in
high poverty schools

Elizabeth J. Pretorius
University of South Africa 


If you can look into the seeds of time

And say which seeds will grow and which will not,

Speak then to me ... (Macbeth, Act I, Scene iii)



Much of the literature on the assessment, development and redress of academic
literacy comes from the higher education sector. In contrast, this article turns an
investigative gaze on academic literacy in three primary schools in a disadvan-
taged area, and compares the language and reading accomplishments of Grade 7
learners in their home language, Northern Sotho, and in English, the language of
learning and teaching. On the basis of these findings implications are drawn about
the development of academic literacy in primary schools and the dynamics of
student unpreparedness within the broader educational system. In conclusion it is
argued that current learning and teaching conditions in South Africa afford a
rather sombre view of the development of academic literacy in primary schools.


1 . Introduction 

In one of the early scenes of the play Macbeth, Banquo and Macbeth, returning 
from battle, are met by three witches who prophesy (in ambiguous phrases, as
witches are wont to do) what will happen to Macbeth. Curious about his own 
future, Banquo asks the witches to "look into the seeds of time/And say which 
seeds will grow and which will not" in his situation. While it was common 
during the Middle Ages for witches to foretell the future, in today's world we 
often look to scientific research for guidance as to which factors might predict 
specific behaviours or events. 

This study examines the reading accomplishment of Grade 7 learners in three 
different schools in an urban South African township to see what this reveals 
about the development of academic literacy in primary schools, and to consider 
what light the findings might shed on the phenomenon of student unprepared- 
ness currently being experienced in the higher education sector in South Africa. 
While this study cannot claim the same accuracy of prediction as that of the 
witches in Macbeth, some tentative predictions are made from the findings as to 




which academic seeds sown in primary school "are likely to grow and which 
will not". 

Background

In South Africa concern is being expressed over the academic preparedness of 
students entering tertiary level. Over the years, Grade 12 school leavers' results 
have become progressively variable. Although school exit scores in the top ranges 
are strong predictors of performance at tertiary level, predictability becomes ten-
uous with middle and lower range scores. In 2005 Higher Education South 
Africa commissioned the implementation of the National Benchmark Test 
Project 1 whose brief, basically, is to "assess students' levels of academic 
readiness ... prior to possible entry to higher education" (Griesel, 2006). 
Three aspects of academic readiness are covered by the set of national 
benchmark tests, viz. academic literacy, quantitative literacy and mathe- 
matics. 

Numerous scholars have researched the knowledge and skills that char- 
acterise academic literacy (e.g. Corson, 1997; Cummins, 1979, 1991, 2000; 
Thomas & Collier 1997; Bachman & Palmer 1996 2 ). These include factors 
such as vocabulary, grammatical insight, inferencing, distinguishing main 
from secondary ideas, perceiving relations between text entities, knowl- 
edge of genre structure and conventions, understanding visual informa- 
tion, sequencing, and so on. A combination of these abilities enables stu- 
dents to maximise learning opportunities and read to learn effectively. Mayer 
(1992) argues that academic learning is underpinned by the ability, inter alia, to 
"focus attention on relevant information, to build connections among the rele- 
vant pieces of information, and to build connections between old and new knowl- 
edge" (1992:256). Over and above these cognitive-linguistic skills and knowledge 
bases, academic literacy also entails membership of a particular group and orien- 
tation towards a set of behaviours and practices evinced by the group (Gee, 1996). 

Even though academic literacy is most often associated with secondary and 
tertiary education, academic literacy has its roots in primary school. Yet
academic literacy at this level remains an under-researched domain with-
in the South African context. In this article the relationship between reading,
academic performance and academic literacy is briefly examined. The study is 
then described and the results of Grade 7 reading comprehension tests from 
different learning contexts within the same township are compared and dis- 
cussed. Finally, the implications of these findings for the development of aca- 
demic literacy in South African schools are considered. 


1 The Centre for Higher 
Education Development at 
the University of Cape Town 
has been tasked with the 
management and develop-
ment of this project. 


2 The academic literacy test of 
the NBTP uses as its basic 
framework the specifications 
identified by Bachman & 
Palmer 1996.




2. Reading, academic literacy and academic performance 

In 2001 the Department of Education undertook its first large scale systemic eval- 
uation of reading and writing in Grade 3 across all nine provinces. The results 
showed a mean of 38% in the home language in Grade 3 (Department of Educa- 
tion, 2003). In 2005 the results of the systemic assessment of reading and writing in 
Grade 6 were released. Here too the results showed a national mean of 38% in the 
language of learning and teaching (LoLT): 63% of learners were found to be 
performing in the 'Not Achieved' band (Department of Education, 2005). Clearly, 
these results indicate low literacy accomplishments, yet it is on this basis that the 
subsequent development of academic literacy is supposed to be founded. 

'General' language proficiency versus academic language proficiency
Language development during the school years becomes increasingly differenti- 
ated according to particular demands and contexts, each of which has its
own specific functions, linguistic registers and conventions. The differences be-
tween spoken and written modes of communication were first articulated by 
Bernstein in the 1960s when he distinguished between restricted and elaborated
codes of language use (Bernstein, 1966). Cummins later proposed a distinction 
between two kinds of language proficiency, based on the context and the func- 
tions that it serves, namely Basic Interpersonal Communicative Skills (BICS) and 
Cognitive Academic Language Proficiency (CALP) (Cummins, 1979). Later, Cum- 
mins emphasised the intersection of two continua, one relating to the cognitive 
demands of the task and the other to the extent of it occurring in embedded or 
reduced contexts (Cummins, 1991, 2000). 

BICS, used in everyday communication, is more context-embedded in that it 
contains paralinguistic features (gestures, intonation, facial expression, etc.) to
convey meaning and facilitate comprehension, as well as deictic items whose 
meaning can be recovered from the interactional context. Academic language 
proficiency, on the other hand, involves use of a more context reduced language 
associated with written language and with the more formal aspects of classroom 
and lecture-hall language use typical of the learning context. This is not to say 
that written language is context-free, rather that meaning is located in the text to 
a larger extent than is the case in oral discourse. It is the ability to access, under- 
stand and convey information from and into written language rather than oral 
language that accounts for success in the learning context. 

Crossing the bridgefrom oral to written language 

Academic skills in reading and writing take longer to develop than speaking and 
listening skills. All children acquire BICS in their primary language by the time 
they start school. They start acquiring academic language when they learn to 
read and write and are exposed to written forms of language. Hakuta et al.'s study 




(2000, in Cummins, 2000:58) showed that oral proficiency in an additional lan- 
guage takes 3-5 years to develop whereas academic proficiency can take 4-7 years
to develop. In addition, the processes and skills involved in the development of 
academic literacy do not all develop at the same time and are largely dependent 
on the course of reading development. 

Automaticity in reading 

Automaticity refers to the rate, fluency and accuracy with which readers recog- 
nise words and place the information in short term memory (STM) without hav- 
ing to 'sound out' words. A lack of automaticity in decoding can interfere with
comprehension processes and make reading effortful. As Kucer (2005:107) ex- 
plains, "STM becomes overwhelmed with bits and pieces of discourse. The read- 
er is unable to make sense of the contents in STM because not enough informa- 
tion is available". Automaticity develops through constant exposure to meaning-
ful texts. From Grades 1 to 3 the technical aspects of reading (decoding) are em- 
phasised. By the end of Grade 3 decoding skills should be well developed and 
become increasingly automatised. Grades 4-7 are crucial for the further develop-
ment of automatic decoding as well as strong comprehension reading abilities for 
it is on these foundational abilities that academic literacy is developed. 

Text demands 

From Grade 4 onwards literacy becomes a vehicle for transmitting information 
via informational and expository texts. Academic language proficiency demands
increase significantly as texts start taking on forms and functions not previously 
encountered. Expository texts contain less familiar words not encountered in
everyday language; they contain longer and more complex sentences; and con-
ceptually the texts become more dense, complex and abstract. Visual literacy demands
also increase as learners 'read' increasingly sophisticated tables, charts, maps and 
diagrams. Cause and effect, compare and contrast, and problem-solution struc- 
tures dominate. Even learners who have mastered the basics of storybook reading 
may experience problems when they encounter extended texts, new registers and 
words. 

Comprehension processes 

Comprehension involves not only the processing of linguistic information but 
also the processing of textual knowledge and general background knowledge, 
which entail massive amounts of cognitive processing involving inferring, un- 
derstanding, integrating, evaluating information within and across texts, recog-
nising inconsistencies in text information, monitoring the comprehension proc-
ess and applying repair strategies when comprehension breaks down. Compre-
hension enables the addition of new knowledge gained from texts to existing 




knowledge bases in memory, and the modification of existing knowledge bases 
in memory in response to information acquired from texts. 

Ironically, after the Foundation phase very little attention is given to explicitly 
teaching reading even though learners are increasingly expected to read to learn 
from content subject textbooks, putting new linguistic and cognitive demands 
on learners. As Kucer (2005: 34) points out, "things that could be overlooked or 
avoided in shorter 'single sitting' texts become increasingly problematic". In many 
South African classrooms teachers tend to assume that the literacy basics have 
been taught and so they concentrate on content subject information. Yet, for 
many learners the transition from decoding to comprehension does not happen
easily, especially if they are not regularly exposed to meaningful reading activi- 
ties involving extended and authentic texts. 

Research questions 

While academic literacy involves more than being able to decode and compre- 
hend texts, the knowledge, skills and processes involved in reading form the core 
of academic literacy. Without automatic decoding and sound comprehension 
abilities, academic literacy cannot develop properly. 

By the time that Grade 7s reach the final year of primary school they are
expected to have developed various comprehension competencies involving in-
terpretation, synthesis and evaluation. In reality the picture may be very differ-
ent. The research reported on in this article addresses the question of academic
literacy by examining primary school exit level reading abilities in both the L1,
Northern Sotho (henceforth N Sotho), and English, the language of learning and
teaching. There are four questions that inform the study:

1. What is the reading profile of Grade 7 learners from three different urban
township schools, in N Sotho and English?

2. How does reading performance in Grade 7 relate to academic perform-
ance?

3. What are the implications of these findings for the development of aca-
demic literacy in primary schools?

4. What light does this research shed on the future preparedness (or not) of
students entering the HE sector?


3. Methodology 

Broader context 

The three primary schools from which the data were obtained are all situated in 
a predominantly N Sotho/Tswana speaking township in Gauteng province. There 
are one private and 26 state primary schools in the township. Of the state primary
schools, 10 are predominantly N Sotho speaking, 9 Tswana, 3 Zulu, 2 Tsonga, 1 




Venda and 1 South Sotho. In the majority of these schools, initial schooling takes 
place in an African language, from Grade 1 to Grade 3. The switch to English as 
LoLT is made in Grade 4. Thereafter the specific African language is taught as a 
first language subject. 

The intervention project 

Two of the above primary schools (Schools B and P) are involved in a reading 
intervention programme, the aim of which is to make reading an integral part of 
daily school activities. It is hoped that by developing a culture of reading these 
schools will be able to improve the overall language and academic development 
of the learners. To this end a multi-level approach has been adopted that empha- 
sises the building up of print-based resources as well as capacity, and involves
the participation of the learners, teachers and parents. 

The intervention project assists the schools in setting up a functional school 
library where learners have easy access to age-appropriate books in both N Sotho
and English. Besides the library, the schools' resources are also enhanced by 
making teachers at all grade levels aware of the need to create print-rich class- 
room environments. 

Because literacy resources have no value if not used properly, teachers (and 
parents) need to be shown what to do with books. The intervention thus also 
focuses on developing the instructional capacity of the teachers and the support- 
ive capacity of the parents. Workshops are held fortnightly with the teachers after 
school to increase teachers' understanding of the reading process. Arrangements 
are also made for teachers to take turns, by prior appointment, to spend a morn-
ing observing good reading practice in a grade equivalent classroom in a highly 
effective school where reading is a priority (e.g. School M below). 

A family literacy component is also included in the project to involve parents 
more actively in the literacy development of their children. To this end a series of 
Family Literacy workshops are held for parents. The aim of these workshops is to 
draw parents' attention to the importance of reading, to encourage them to read 
to their children and/or to listen to their children reading, to take an interest in 
children's school activities, make time and space available in the home for home- 
work, encourage membership of the local community library, and so on. 

Schools B and P (state schools) 

School B has over 600 learners and a staff of 16 teachers. The school serves a 
socioeconomically disadvantaged community. School fees were R120 (about $20)
per annum but at the end of 2006 the school was declared a non-fee paying 
school. The school has a feeding scheme, where 400 children are fed once a day. 
For many of these children, this is the only meal of the day. 

There are two classes at each grade level. In the Foundation Phase, there are 




about 35 children per class. This increases to around 50 per class in Grade 7. N
Sotho is the initial language of learning and teaching from Grade R to Grade 3,
after which English becomes the LoLT. N Sotho is taught as a subject from Grades
4-7. Although many children come from homes in which a variety of African 
languages are spoken, about 70% of the learners at this school come from prima- 
rily N Sotho speaking homes.

Similar to School B, School P also has about 600 learners and a staff of 16
teachers. It also serves a low socioeconomic community, it too has a school feed- 
ing scheme, and most of the learners at this school also come from primarily N 
Sotho speaking homes. However, unlike School B where initial literacy and nu- 
meracy is taught in N Sotho to the end of Grade 3, School P has a 'straight for 
English' policy from Grade 1. N Sotho is taught as a subject from Grades 2 to 7. 

For ease of reference, Schools B and P will be referred to collectively as the 
'township' schools. To monitor project progress, all the Grade 7 learners at both 
schools are tested each year for language and reading ability in N Sotho and 
English. 

School M (private school) 

School M is a small private primary school that was opened in the township in 
1991. Even though it serves the same township community, many of the children 
at this school come from higher socioeconomic homes. However, there are also 
several children from poor homes who attend the school on scholarships. 

The classrooms are well resourced and the teachers well qualified, experi- 
enced and dedicated. Classes are small (about 25 learners per class). Reading and
storybooks are an integral part of each classroom in the lower grades, and teach- 
ers have high reading expectations of learners. Teachers from Schools B and P 
attend the private school for classroom observations and occasional workshops, 
and closer ties are being forged between these schools. 

The school has a 'straight for English' policy. The learners at this school are 
not linguistically homogeneous but speak different African languages at home. 
No African languages are taught as subjects. Unlike many private schools or ex- 
Model C schools, the learners at this school do not have peers for whom English
is an Ll. 


4. Language and reading assessment 

The 2006 cohorts of Grade 7 learners at the three schools were administered a 
reading test in English and in N Sotho. The learners were also all given a lan- 
guage test in English and N Sotho to enable exploration of the language-reading 
relationship. Since no African languages are taught as a subject at School M, the 
Grade 7s at this school only completed the English language and reading tests. 




Language proficiency 

In this study language proficiency was operationalised as performance on a dic- 
tation test in each language. According to Oller (1979:58), dictation correlates "at 
surprisingly high levels with a vast array of other language tests". This correla- 
tion points to dictation tasks tapping into similar knowledge sources that stand- 
ardised language tests tap into but it does so via the auditory rather than the 
written medium. 

A dictation test is an integrative, holistic language test - provided it is admin- 
istered appropriately. A passage that is dictated word-for-word becomes a short 
term memory test and hence not much use as a measure of language knowledge. 
Instead, the dictation passage is first read at normal conversational pace while the 
testees simply listen. The second time it is read at conversational pace, but chunked 
into natural sections of about 5-6 words which are not repeated. This kind of 
task meets the two naturalness criteria for natural language processing tasks, viz. 
it requires the processing of temporally constrained sequences of linguistic mate- 
rial and, in order to divide the stream of speech into identifiable chunks for 
writing, it requires an understanding of the meaning of what was heard (Oller, 
1979:39). 
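As a rough illustration of the chunked delivery described above, the sketch below splits a passage into fixed-size word groups. It is a simplification: in the actual procedure the chunks follow natural phrase boundaries, which requires human judgement, and the passage and function name are invented for illustration.

```python
def chunk_for_dictation(passage, chunk_size=6):
    # Split a dictation passage into short word groups, each read aloud once,
    # so the task tests language knowledge rather than short-term memory.
    words = passage.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

print(chunk_for_dictation("The learners listened while the passage "
                          "was read a second time"))
```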

The dictation passages were taken from current Grade 7 textbooks. A set of 
criteria was drawn up jointly by the English and N Sotho team members for the 
marking of the dictation passages. Spelling and punctuation were also taken into 
account. For the N Sotho dictation, words that were written conjunctively instead 
of disjunctively were accepted as correct, provided they were spelled correctly. 

Reading comprehension 

N Sotho and English reading proficiency was operationally defined as proficien- 
cy obtained in a reading comprehension test where a combination of test items 
was used for both narrative and expository texts. The texts were taken from exist- 
ing Grade 7 textbooks. The test items that were designed included multiple choice 
questions of an inferential nature, vocabulary questions, cloze items, identifying 
referents of anaphoric items, and questions involving graphic information, e.g. 
maps and graphs. 

Reading rate 

During the reading test an informal measure of the learners' reading rate was 
taken. After the test preliminaries, the learners were instructed to start reading. 
After a minute, they were stopped and asked to circle the word they had been 
reading. Readers then continued the passage and answered the questions that 
followed. The number of words read gave a rough indication of reading rate. 
Because it is difficult to accurately assess reading rate in large groups, the scores 
are treated with caution. 
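Where timing cannot be held to exactly one minute for every learner, the circled-word count can still be normalised to a words-per-minute rate. A trivial sketch (the function name is illustrative, not from the study):

```python
def words_per_minute(words_read, seconds):
    # Normalise a timed reading sample to a words-per-minute rate.
    return words_read * 60.0 / seconds

# A learner who stopped at word 102 after exactly 60 seconds:
print(words_per_minute(102, 60))  # 102.0
```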




Administrative procedures 

The English tests were administered first, in October. To reduce memory effects, 
the N Sotho tests were written about 3 weeks after the English tests. Both sets of 
tests were written during two periods allocated during school hours and admin- 
istered by the project researchers. No specific time limits were set for completion 
of the tests.


5. Results 

The data were captured and analysed using SPSS. Using the Cronbach alpha
model, the reliability score for the English tests was .74, while the alpha score
for the N Sotho pre- and posttests was .75. Given the small scale nature of the
study, these alpha scores are regarded as acceptable (e.g. George &
Mallery, 2003:231).
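Cronbach's alpha can be computed directly from item-level scores as alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). The sketch below shows the computation; the item scores are invented for illustration and are not the study's data.

```python
def cronbach_alpha(item_scores):
    # item_scores: one inner list per test item, each of length n_testees.
    # Uses population variance throughout, per the formula above.
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in item_scores) / var(totals))

# Four hypothetical dichotomously scored items answered by five testees:
items = [
    [1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1],
]
print(round(cronbach_alpha(items), 2))  # 0.75
```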

Descriptive statistics were used to explore the first question, viz. 

1. What are the language and reading profiles of Grade 7 learners from three
different township schools? 

Table 1 reflects the mean, minimum and maximum scores. The quartiles for read- 
ing in English show how the distribution of learners' scores compares across the
schools. Three interesting patterns emerged. Firstly, as can be seen from the table,
the two township schools performed similarly in reading, in both N Sotho and 
English, with both schools showing low reading levels. In contrast, the private 
school far outperformed the intervention schools on language and reading scores 
and the learners also had faster reading rates. Secondly, one notes the large dis- 
crepancy between performance on the language and reading tests in the town- 
ship schools (almost 30% in the case of N Sotho). Thirdly, the N Sotho reading 
scores in the township schools lagged behind the English reading scores, and the 
learners read more slowly in N Sotho than in English. 

Table 2 shows the breakdown of performance in the different components of the 
reading test. There are two trends to note here. Firstly, there is again the striking 
differential performance on all components of the reading test between the township 
schools on the one hand and the private school on the other. The township 
schools show low and, in general, fairly similar scores on all the measures; in 
contrast, the private school scored well on all the measures. The inferencing, cloze 
and anaphoric components of the tests are all aspects of reading that require readers 
to perceive connections between text units in order to make sense of the text as 
a whole, yet it is these aspects that posed challenges for the learners in the township 
schools. Learners who struggle with these kinds of test items are usually 
learners who have not yet learned to be strategic, meaning-making readers. 




Table 1: Grade 7 comparison of mean percentages across the schools, November 2006 

                                      Township       Township       Private 
                                      School P       School B       School 
                                      (n = 54)       (n = 50)       (n = 25) 
Average age (range of years)          13.6 (11-16)   13.6 (11-16)   13.4 (12-14) 
Mean NS dictation %                   64.5           67.7           - 
  SD                                  27.04          25.65          - 
  Minimum                             0              0              - 
  Maximum                             97             98             - 
Mean NS reading comprehension %       36.06          38.1           - 
  SD                                  21.57          19.62          - 
  Minimum                             2              6              - 
  Maximum                             91             82             - 
Mean English dictation %              63.4           55.2           92.4 
  SD                                  30.55          33.54          13.14 
  Minimum                             0              2              45.2 
  Maximum                             98             100            100 
Mean English reading comprehension %  46.2           44.5           80.4 
  SD                                  19.89          19.99          11.92 
  Minimum                             11             9              52.3 
  Maximum                             97             85             95.2 
  Percentile 25                       30.9           30.9           72.6 
  Percentile 50                       42.8           39.2           80.9 
  Percentile 75                       57.1           62.5           90.4 
Mean reading rate (wpm): N Sotho      102            106            - 
                         English      132            131            169 


Secondly, one notes in the township schools similarities in performance between 
scores in N Sotho and English on component parts of the test. For example, 
learners' inferencing abilities were similar, irrespective of whether they were 
answering an inference question in N Sotho or in English. The cloze and anaphoric 
components, items that require fairly close attention to textual details for 
meaning construction within and across sentence boundaries, yielded relatively 
better performance in English than in N Sotho. The only component of the reading 
test that township learners coped with relatively adequately was the section 
relating to questions about a graph (58% and 60% respectively). This was also the 
easiest component of the reading test for the learners at the private school, and 
they sailed through it with a 93% average. 

To further explore the relation between performance in reading in the two 
languages in the township schools, a Pearson Product Moment correlation was 
applied, yielding a robust and highly significant correlation between reading in 
N Sotho and reading in English: r = .17 (p < 0.0001). In other words, if learners 
were good at reading in one language, they tended to be good at reading in the 




Table 2: Grade 7 comparison of components of English reading comprehension 
across the schools, November 2006 

                                Township School P    Township School B    Private 
                                NS        Eng        NS        Eng        School 
Mean reading comprehension %    36.06     46.2       38.1      44.5       80.4 
Components of reading test % 
  Vocabulary                    47.3      48.8       46.3      42.6       82.6 
  Inferencing                   48.8      49.7       50.3      49.6       74.6 
  Cloze                         27.5      43.2       33.6      41.4       77.4 
  Reading graphs                58.9      58.2       60.8      47.3       93.6 
  Anaphoric resolution          25.0      36.6       18.1      41.0       80.5 


other language; similarly, poor readers in one language were also poor readers in 
the other language. 
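The product-moment correlation used here is the standard covariance-based measure. A self-contained sketch of it (the paired score lists below are invented for illustration, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores that rise together yield a perfect positive correlation:
r = pearson_r([40, 55, 70], [35, 50, 65])  # 1.0
```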

The second research question seeks to examine the link between reading and 
school-based performance: 

2. How does reading performance in Grade 7 relate to academic perform- 
ance? 

In order to address this question, the learners' scores in the Grade 7 end-of-year 
examinations were obtained and a mean score computed from the eight subjects 
(English, N Sotho, Afrikaans, Maths, Natural Science, Social Science, Life Orientation 
and Technology). The learners were then placed into four achievement 
categories used by the Department of Education, viz. Not Achieved (0-39%), 
Partially Achieved (40-49%), Achieved (50-69%) and Outstanding (70-100%). The means for 
L1 and L2 language and reading of the learners in each of these four levels were 
then tabulated. The results are shown in Table 3. It is clear from the table, firstly, 
that across the schools there is a trend of increased language and reading ability 
associated with academic category. Learners in the Not Achieved category had 
much lower reading scores than learners in the Partially Achieved category, who 
in turn had lower reading scores than those in the Achieved group; learners in the 
Outstanding group were all competent readers. 
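The banding procedure amounts to a threshold lookup on the mean of the eight subject marks. A minimal sketch (the cut-offs are the Department of Education bands quoted above; the subject marks are invented):

```python
def achievement_band(mean_pct: float) -> str:
    """Map a mean percentage to its Department of Education category."""
    if mean_pct < 40:
        return "Not Achieved"
    if mean_pct < 50:
        return "Partially Achieved"
    if mean_pct < 70:
        return "Achieved"
    return "Outstanding"

# Eight invented subject marks for one learner:
marks = [55, 48, 60, 41, 52, 58, 63, 50]
band = achievement_band(sum(marks) / len(marks))  # "Achieved"
```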

Secondly, the trend of increased language and reading ability associated with 
academic category operated at two different levels according to the type of school. 
In other words, the notion of 'good' and 'poor' reader was relative to the school 
context. The Not Achieved learners at the private school were reading at higher 
levels than the learners in the Achieved category in the township schools. This 
differential trend is clearly seen in Figure 1, where the mean reading scores are 
given of learners in each academic category across the three schools. Table 3 also 
clearly illustrates two other features previously identified, namely, the lower N 




Table 3: 2006 comparison of L1 and L2 language and reading proficiency 
across the four academic achievement categories 

Academic category   Language & reading   Township   Township   Private 
                    assessment           School B   School P   school 
                                         Mean       Mean       Mean 
Not Achieved        L1 language          24.1       37.6       - 
                    L1 reading           13.3       12.2       - 
                    L2 language          11.9       17.6       79.7 
                    L2 reading           19.04      20.2       63.6 
Partly Achieved     L1 language          48.4       58.6       - 
                    L1 reading           26.3       30.3       - 
                    L2 language          27.1       50.6       84.8 
                    L2 reading           32.9       38.6       70.8 
Achieved            L1 language          77.4       69.1       - 
                    L1 reading           42.3       39.2       - 
                    L2 language          67.8       74.1       96.9 
                    L2 reading           48.3       50.4       86 
Outstanding         L1 language          94.6       84.3       - 
                    L1 reading           65         60         - 
                    L2 language          96.7       87.6       99.4 
                    L2 reading           78.5       72.3       91.6 


Sotho reading scores in relation to English within each group, and the wide gap 
between the dictation and reading scores in both languages in the township 
schools. 

It is also instructive to see how learners in the different academic groups 
coped with the various components of the reading test, as reflected in Table 4. At 
this point it is important to note that the learners at the private school finished the 
reading test in half the time it took the township learners to complete it. 
Several of them made remarks afterwards such as "Oh, mam, that was easy!" or 
"That was fun! When are you coming again?" These observations are supported 
by the outcomes reflected in Table 4. 

Here too we see a trend of increased performance in all the reading components 
across the academic groups, with fairly large differences in the township 
schools in mean scores for the inferencing, cloze and anaphoric components 
between the Partly Achieved and the Achieved learners, and again between the 
Achieved and the Outstanding learners. There is also a clear differential distribution 
of mean scores across the academic groups between the township schools 
and the private school. The Not Achieved and Partly Achieved learners at the private 
school performed most poorly on the cloze item (50%), but this was still higher 
than the Achieved learners' performance on the same item in the township schools. 
These results clearly suggest that even learners who have failed at the private 
school have achieved much higher levels of literacy accomplishment than a great 
many of the learners who pass in the township schools. 





Figure 1 : Differential reading performance across the schools 


Table 4: Performance in components of reading test across the 
four academic groups 

                          Not        Partly     Achieved   Outstanding 
                          Achieved   Achieved 
Vocabulary questions 
  School P                30.9       47.4       52.7       60 
  School B                27         33.3       45.2       54.1 
  Private                 70.8       87.5       86.6       94.1 
Inferencing questions 
  School P                30.9       35.8       56.1       73.3 
  School B                33.3       42.1       51.2       75 
  Private                 62.5       66.6       78.8       83.3 
Cloze questions 
  School P                13.4       33.4       49.4       78.8 
  School B                5.8        23.3       46.3       92.6 
  Private                 50         61.7       86.6       94.1 
Anaphoric resolution 
  School P                25         22.1       40         67.5 
  School B                0          33         43.3       69 
  Private                 71.8       75         83.3       88 
Reading a graph 
  School P                45.7       6          60.6       68 
  School B                0          37.2       52.3       67 
  Private                 80         90         97         100 




Finally, to explore the relationship between the language and reading variables 
(as independent or predictor variables) and academic performance (as the 
dependent variable), stepwise regression analyses were performed separately for 
the township schools and for the private school. Three learners had marks that 
were excessively influencing the models, so these leverage points were removed. 
Even though the outcomes varied slightly with their removal, significant models 
still obtained for both school groups. For the township schools, using the stepwise 
method, a significant model emerged (F = 57.0, p < 0.0005), with an adjusted R 
square of 0.52. For the private school, Reading Comprehension was a significant 
predictor (F = 93.5, p < 0.0005), with an adjusted R square of 0.80. Standardised Beta 
coefficients identifying significant predictor variables are shown in Table 5. 


Table 5: Predictor variables for exam performance 

                   Predictor variables        Beta 
Township schools   L2 Reading comprehension   0.452   p < 0.0005 
                   L2 Language                0.333   p = 0.001 
Private school     L2 Reading comprehension   0.900   p < 0.0005 


In the township schools English Reading Comprehension alone accounted for 
48% of the variance; the inclusion of English Language explained an additional 
5%, so that the model accounted for 52% of the variance in total. In the township 
schools N Sotho Language and N Sotho Reading Comprehension did not predict 
exam performance. In the private school English Reading Comprehension was a 
sufficient predictor of exam performance, accounting for 80% of the variance. 
These results confirm that for both the township and the private schools, there is 
a strong relationship between reading ability and academic performance. 
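The adjusted R square values quoted here apply the standard correction of R square for sample size and number of predictors. A minimal sketch of that adjustment (the figures passed in are illustrative, not reconstructed from the study):

```python
def adjusted_r_squared(r2: float, n: int, k: int) -> float:
    """Standard adjustment of R-squared.

    n: sample size; k: number of predictor variables.
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With two predictors the adjusted value is always below the raw R-squared:
adj = adjusted_r_squared(0.53, 104, 2)
```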

6. Discussion 

There are three salient findings to emerge from the data in this study. Firstly, there 
is the large discrepancy in reading performance between the township schools 
and the private school, with the former showing disturbingly low levels of reading 
and the latter showing consistently high levels of reading performance. It 
must be noted that Schools B and P are fairly 'typical' township schools, so one 
can presume that their performance on the language and reading measures is 
not uncharacteristic of similar township schools - which constitute the bulk of 
schools in South Africa (Gustafsson, 2005). If after seven years of primary schooling 
learners enter high schools with such low literacy outcomes, it is unlikely 




that they will be able to catch up during five years of high school when the 
pedagogic focus is on content subjects, not literacy development. The implications 
are clear: the success of academic redress and equity lies in the quality of 
education in primary schools. 

The second salient finding relates to the consistent and robust relationship 
that obtained between reading comprehension and academic performance across 
all three schools. The better the reading scores, the stronger the learners' academic 
achievement. The implications are clear: in order to improve academic performance, 
schools need to focus specifically on improving learners' reading skills. 
Primary schools have a major role to play in this regard. 

The third salient finding is applicable to the township schools and relates to 
reading in N Sotho. Reading in the L1 lagged behind reading in English, and 
large differences generally obtained between performance on the language and 
the reading tests, particularly in N Sotho. Although reading levels were generally 
low in both languages, the finding that the learners read somewhat faster and 
better in English relative to N Sotho is perhaps not surprising, considering that 
most of the reading activities that occur within classrooms tend to be done in 
English, the LoLT. This finding supports the view that, though dependent on 
language, reading is a distinct ability which develops in specific ways. If children 
are not regularly exposed to texts, they do not become good readers. Reading 
in their own language did not confer an advantage on the learners, even 
when they did well on the N Sotho dictation test. Having N Sotho as a home 
language is not a sufficient condition for becoming a reader in N Sotho. The 
implications are clear: decisions about language policy in schools must take into 
account the fact that language proficiency (in the L1 or an AL) is an important but 
not a sufficient condition for academic performance. 

In the next section we briefly consider what these findings suggest about the 
nature of academic literacy development in primary schools, and the dynamics 
underlying academic (un)preparedness. 

Implications for the development of academic literacy in schools 

In all our schools there is a general concern to develop the language proficiency of 
learners, the assumption being that language proficiency is the gateway to success 
at school, especially proficiency in the LoLT. However, as many studies have 
found (e.g. Cummins, 1991, 2000; Snow & Dickinson, 1991; Snow, Burns & Griffin, 
1998), general language proficiency is not a consistent predictor of academic performance. 
Although many of the learners in this study performed quite well on 
the dictation test, which taps into a more general knowledge of language, it was 
their performance on reading especially that predicted academic performance. 

It is largely through regular exposure to the 'book language' of print material 
over many years that learners become rapid and fluent readers who use cues in 




the texts to construct meaning. Through regular reading, learners are also exposed 
to the vocabulary, text structures and conventions that characterise academic 
discourse, and thus they develop the academic language proficiency that underpins 
success in the learning context. 

Three of the basic building blocks for the development of academic literacy 
in primary schools are easy access to a variety of books, opportunity to read, and 
motivation to read. These factors tend to be absent in the majority of township 
schools that serve poor communities. In our South African context, many learners 
don't read well simply because they don't read enough: the schools they 
attend do not make reading a real priority, very little time is spent on reading, 
and print resources with which to inculcate good reading practices are virtually 
absent. Attention to reading and nurturing a positive attitude towards reading 
are vitally important for developing the automaticity on which more sophisticated 
comprehension processing can build. The slow reading rate of the learners 
in the township schools is indicative of learners who do very little reading of 
extended texts. Only regular, extensive reading leads to increased reading speed, 
which leads to improved working memory capacity, which in turn facilitates 
comprehension processing (Walczyk, Marsiglia, Johns & Bryan, 2004). 

Based on his research into the development of comprehension and representation, 
Van den Broek (1997:321) states that "(o)ne of the most essential aspects of 
our understanding of the world ... is the ability to recognise the relations between 
the events that we encounter". The same principle applies to reading and 
is consistently reflected in the reading results from this study: the learners at all 
three schools who demonstrated an ability to infer the relations between the 
events they encountered in their texts, as reflected especially in the inference, 
cloze and anaphoric items, were learners who understood their text world. This 
ability to focus attention on relevant information and to build connections is 
fundamental to academic literacy. It suggests an ability to construct meaning 
at a deeper level of processing by locating details and utilising relevant textual 
information. Not only did the good readers understand their text world better 
than their peers who performed poorly on these components, they were also 
performing better academically, because this ability gave them the potential to 
integrate their text world into their knowledge bases and thereby acquire new 
knowledge. 

According to Cummins' Linguistic Interdependence Hypothesis (1991, 2000), 
academic literacy operations and constructs transfer across languages and do not 
have to be "relearnt" in another language. It is usually assumed that the direction 
of influence is from the home language to an AL, and indeed most of the findings 
from the developed world support this assumption (cf. Cummins, 2000). The 
findings from the township schools lend some support for this hypothesis, as 




seen in the strong correlation that obtained between reading in N Sotho and 
English in the township schools. However, the fact that the stronger readers were 
consistently stronger in reading in English than they were in N Sotho suggests 
that the direction of influence is coming from English, not the home language. 
The findings from this study suggest that for many learners academic literacy is 
more strongly developed in English than in the home language. 

Based on her research into the teaching of Southern Sotho as home language 
at primary school, Smyth (2002:93, 194-195) argues that learners should be given 
the opportunity to develop academic literacy in their home languages in order to 
provide a sound conceptual and linguistic basis for future learning across all 
content learning areas. This is not happening yet, despite school language policy 
that, theoretically, makes this possible. The lower reading levels and slower reading 
rates in N Sotho in the current study are strongly indicative that not enough 
reading is being done in N Sotho. One of the factors that may contribute to this 
situation lies in the diglossic differences between spoken and written N Sotho. 
The N Sotho that is spoken in the Pretoria area (Sesotho sa Pretoria) is different in 
many respects from standard written N Sotho (Sesotho sa Leboa). Learners 
do not have enough exposure to written N Sotho or opportunities to develop 
proficiency in interpreting and using written forms of the language. The lack 
of resources exacerbates this situation. For example, since the start of the intervention 
project the library at School B has grown from 200 books to over 3,500 books, 
yet there are only 139 N Sotho titles in the library, despite efforts to 
purchase more N Sotho books. Most of these are storybooks intended for children 
under the age of about 10 years: teenage fiction and non-fiction in N Sotho are 
practically non-existent. 

The findings from the private school indicate that when conditions in a 
school are conducive to learning, very high literacy levels can be achieved, even 
when the LoLT is not the L1 of the learners. This suggests that in multilingual 
learning contexts in which print materials in the L1 are scarce, quality of schooling 
is a stronger determinant of the development of academic language proficiency 
than the language in which and through which such development occurs. 

Shedding light on the causes of academic 'unpreparedness' 

Given the strong relationship that prevails worldwide between socioeconomic 
factors and school achievement (e.g. Allington, 2002; Bradley & Corwyn, 2002), it 
could be argued that the discrepancies in performance between the two groups 
of schools in this study are not surprising, given their socioeconomic differences. 
It is not easy educating poor children, not because children from poor homes are 
inherently weaker than children from middle class homes, but because poor 
children attend schools that tend to be poorly resourced and managed, with 




large classes and fewer well-qualified teachers. Poor children also come from 
homes that contain few literacy resources and whose parents have lower literacy 
levels. Combined, these SES-driven home and school factors conspire to create 
barriers to learning and literacy accomplishments. 

Rather than see SES as a moderator variable that influences a dependent 
variable such as school outcomes, one can consider instead what factors might 
mediate the effects of poverty. Unintentionally, poor schools may be complicit 
in their learners' poor performance. Before the start of the intervention project, 
there was very little reading happening inside or outside the classrooms in the 
two township schools in this study, and very few books to which the learners 
had access. The classrooms were characterised by an absence of print-based 
material, reading homework was non-existent and learners' reading development 
was not monitored. Reading instruction in the early grades basically consisted 
of learners reading lists of syllables and words off the blackboard. Learners 
had no exposure to storybooks to practise their reading skills in N Sotho or 
English, or to discover the joy of reading. This was in strong contrast to the 
private school. Admittedly the latter school is more affluent, its classrooms are 
well resourced and it has much smaller class sizes. However, in addition, the 
school is well run, classroom time is well managed, reading is a priority, much 
'time on task' is devoted to reading and writing activities, high standards are 
expected, learners are regularly assessed, and learners with reading problems 
are identified and given attention. 

Approaches that seek to identify factors that mediate the effects of disadvantage 
have given rise to studies that identify 'resilient' schools - schools which, despite 
their disadvantaged circumstances, manage to achieve high academic standards 
(e.g. Wharton-McDonald et al., 1998; Taylor et al., 2000). Since schools cannot 
change the socioeconomic status of the communities they serve, they should 
change themselves by becoming strong sites of literacy development. Even though 
they remain poor schools, the two township schools in this study are becoming 
'print rich' poor schools with a stronger reading focus. Book resources are increasing, 
teachers are introducing more reading activities into their classrooms, 
there is an explicit public discourse about reading at the school, a 30-minute 
literacy period for all grades has been built into the daily timetable, and parents 
are encouraged to support their children's literacy activities at home. In effect, 
these schools are adopting some of the characteristics of effective schools and, in 
so doing, are becoming resilient schools. 

Even though reading levels remain low, both schools have shown modest 
improvements in reading. There are now fewer 'nonreaders' (those who score 
below 25% on reading tests) than previously. These changes notwithstanding, 
the low reading levels and the reading backlog that has built up over the years 
mean that the reading problems at these schools will not disappear overnight. 




Although reading accounted for 48% of the variance in academic performance 
despite the poverty, there are still other factors at these schools that are influencing 
academic performance. 

The fact that it takes longer to develop academic language proficiency (4-7 
years) than general language proficiency must be considered in conjunction 
with a phenomenon well documented in reading research, namely Matthew 
effects (i.e. the 'rich get richer and the poor get poorer' situation), where good 
readers get better while weak readers get weaker in relation to their good reading 
peers (e.g. Spear-Swerling & Sternberg, 1996). Stanovich (1986) argues that the gap 
between good and weak readers widens as learners move up the educational 
ladder. The findings in this study showed that the reading gap between the Not 
Achieved and the Outstanding learners was already quite considerable in Grade 7. 
Despite seven years of schooling, many of the learners at the township schools 
had a backlog of reading skills to catch up at a time when they were about to 
encounter the new cognitive, linguistic and textual demands of high school. It is 
unlikely that disadvantaged high schools will be able to improve the literacy 
levels of weak learners coming into these schools. The importance of developing 
sound reading abilities in primary schools cannot be overemphasised; this will 
minimise Matthew effects and enable academic literacy to start developing during 
the seven years of primary schooling. 


7. Conclusion 

It is clear that reading is a powerful learning tool for constructing meaning and 
acquiring new knowledge in the learning context. It also affords readers independent 
access to information in an increasingly information-driven society. If 
learners do not start properly mastering this tool during the primary school years, 
then their potential for success in the learning context is handicapped from the 
start. If we use reading as a means of "looking into the seeds of time" in the 
learning context, we can make fairly reliable predictions about which seeds are 
likely to grow and which will not: learners with good reading ability will succeed. 
Unless urgent measures are put into place to develop sound reading abilities 
in our primary schools, current concerns about prospective students' ability 
to cope in the HE sector will continue to occupy us for decades. 




Acknowledgements 

The 'Reading is FUNdamental' project, from which this research derives, is funded by the DG 
Murray Trust and is also supported by the National Research Foundation. Sincere thanks are 
due to all the learners and staff at the three schools for participating so generously and 
willingly in the project. Thanks are also due to Chris Gilfillan for statistical assistance, and to 
the ALRU project team for their steadfast commitment and support: Sally Currin, Nicoline 
Wessels, Debbie Mampuru, Matseleng Mokhwesana, Riah Mabule and Kgalabi Maseko. 


Bibliography 

Allington, R.L. 2002. Big Brother and the National Reading Curriculum. Portsmouth, NH: Heinemann. 

Bachman, L.F. & Palmer, A.S. 1996. Language testing in practice. Hong Kong: Oxford University 
Press. 

Bradley, R.H. & Corwyn, R.F. 2002. Socioeconomic status and child development. Annual 
Review of Psychology 53: 371-399. 

Corson, D. 1997. The learning and use of academic English words. Language Learning 47(4): 
671-718. 

Cummins, J. 1979. Linguistic interdependence and the educational development of bilingual 
children. Review of Educational Research 49: 222-251. 

Cummins, J. 1991 . Conversational and academic language proficiency in bilingual contexts. 
AILA Review 8: 75-89. 

Cummins, J. 2000. Language, power and pedagogy. Clevedon: Multilingual Matters. 

Department of Education. 2003. National report on systemic evaluation: Mainstream Foundation 
Phase. Pretoria: Department of Education. 

Department of Education. 2005. Systemic Evaluation Report: Intermediate Phase Grade 6. 
Pretoria: Department of Education. 

Gee, J. 1996. Social linguistics and literacies: Ideology in discourses. New York: Falmer. 

George, D. & Mallory, P. 2003. SPSS for Windows. Boston: Allyn & Bacon. 

Griesel, H. (ed.). 2006. Access and entry level benchmarks. The National Benchmark Tests Project. 
Pretoria: Higher Education South Africa. 

Gustafsson, M. 2005. The relationship between schooling inputs and outputs in South Africa: 
Methodologies and policy recommendations based on the 2000 SACMEQ dataset. [Online]. 
Available at http://www.sacmeq.org/links.htm 

Kucer, S.B. 2005. Dimensions of literacy. Mahwah, NJ: Lawrence Erlbaum. 

Mayer, R.E. 1992. Guiding students' cognitive processing of scientific information in text. In: 
Pressley, M., Harris, K.R. & Guthrie, J.T. (eds). Promoting academic competence and literacy 
in school. San Diego: Academic Press: 243-258. 

Oller, J.W. 1979. Language tests at school. London: Longman. 

Smyth, A. 2002. Testing the foundations: An exploration of cognitive academic language 
development in an African home-language course. Unpublished doctoral thesis. Johannesburg: 
University of the Witwatersrand. 

Snow, C.E., & Dickinson, D.K. 1991. Skills that aren't basic in a new conception of literacy. In: 
Jennings, E.M. & Purves, A.C. (eds.). Literate systems and individual lives. New York: State 
University of New York Press. 

Snow, C.E., Burns, M.S. & Griffin, P. 1998. Preventing reading difficulties in young children. 
Washington, DC: National Academy Press. 

Spear-Swerling, L. & Sternberg, R.J. 1996. Off track: When poor readers become 'learning disabled'. 
Boulder, CO: Westview Press. 

Stanovich, K.E. 1986. Matthew effects in reading: Some consequences of individual differences 
in the acquisition of literacy. Reading Research Quarterly 21 :360-406. 

Taylor, B.M., Pearson, P.D., Clark, K. & Walpole, S. 2000. Effective schools and accomplished 
teachers: Lessons about primary-grade reading instruction in low-income schools. The 
Elementary School Journal 101: 121-165. 




Thomas, W.P. & Collier, V. 1997. School effectiveness for language minority students. Washington, 
DC: National Clearinghouse for Bilingual Education. 

Van den Broek, P. 1997. Discovering the cement of the universe: The development of event 
comprehension from childhood to adulthood. In: Van den Broek, P., Bauer, P.J. & Bourg, 
T. (eds). Developmental spans in event comprehension and representation. Mahwah, NJ: Lawrence 
Erlbaum Associates. 

Walczyk, J., Marsiglia, C.S., Johns, A.K. & Bryan, K.S. 2004. Children's compensations for 
poorly automated reading skills. Discourse Processes 37(1): 25-66. 

Wharton-McDonald, R., Pressley, M. & Hampston, J.M. 1998. Literacy instruction in nine first-grade 
classrooms: Teacher characteristics and student achievement. The Elementary School 
Journal 99: 101-128. 


Testing academic literacy over time: Is the academic literacy of 
first year students deteriorating? 

Frans van der Slik 

Radboud University of Nijmegen & Research associate, University of Pretoria 
Albert Weideman 
University of Pretoria 



How much empirical evidence is there for the frequently expressed opinion that the 
academic literacy levels of first year students at South African universities are 
steadily deteriorating? Two tests of academic literacy used widely in South Africa, 
the Test of academic literacy levels (TALL) and its Afrikaans counterpart (TAG), 
may hold at least a partial answer to this question. We subject the administration, 
over the years 2005-2007, of one of these tests, the Toets van akademiese geletterdheidsvlakke 
(TAG), to an IRT analysis, using a One-Parameter Logistic Model 
(OPLM) package. The results show that, if we equalise the subsequent tests in 
terms of the first administration, there is evidence that is contrary to the popular 
opinion. More importantly, however, using an OPLM analysis enables us to make 
more responsible decisions derived from test results, and so make our tests not 
only theoretically more defensible, but also more accountable to a larger public. 


1 . Introduction 

The debate about academic literacy in South Africa is situated at the interface 
between school and university education, or, in official terms, general and fur- 
ther education on the one hand, and higher education on the other. More often 
than not, questions are raised in terms of the readiness of students about to enter 
institutions of higher education, and specifically about their preparedness in 
terms of their ability to understand and use academic language within this new 
environment. It is therefore not surprising that there is already substantial expe- 
rience in South Africa on the design and use of tests of academic literacy both for 
access and placement purposes (cf. Cliff, Yeld & Hanslo, 2003; Cliff & Yeld, 2006; 
Visser & Hanslo 2005; Weideman, 2003, Van der Slik & Weideman 2005, 2007; Van 
Dyk & Weideman 2004a, 2004b). In the discussion that follows, the measurement 
of academic literacy levels is understood to refer to the assessment of the ability of 
students to use language at the appropriate and desired level within the academ- 
ic community, or their level of competence in academic discourse and its conven- 
tions, as this is defined in the work referred to here (especially Cliff & Yeld, 2006 
and Van Dyk & Weideman, 2004a). 




The interest in academic literacy levels is not confined to scholarly attention 
and investigation. It engages both experts and lay people in equal measure. A 
popularly expressed opinion would have it, for example, that the language abil- 
ities of our students are steadily decreasing over time. In South Africa, such atti- 
tudes are fed by occasional fairly sensationalist press reports of lower (and by 
implication lowering) literacy levels among pre-university learners (cf. for exam- 
ple Rademeyer, 2007). Without much ado, 'low' scores are interpreted as decreas- 
ing ability. The question is almost never asked whether the scores have not per- 
haps been as low as this for some time. Phrased differently: a chronic problem is 
not the same as standards that are lowering. 

Furthermore, it often escapes the readers of these reports that some of them 
have as their origin the testing of academic and other forms of literacy by those 
producing commercially designed tests. Readers are not told, in other words, 
that those with whom the 'evidence' originates may have a financial interest in 
the results of these kinds of report. Dwindling language ability among the younger 
generation is an opinion akin to a number of those that Widdowson (2005: 15f.) 
discusses as 'folklinguistics'. In the perception of those involved, such strongly 
held opinions find more than adequate evidence in their everyday experience. It 
is of course so that theoretical analysis ignores naïve experience at its peril. Yet in 
the present case one would do well to ask: is it indeed a matter of something 
experienced intuitively as almost self-evident, or could these merely be deeply 
held prejudices and biases that are masquerading as observations that are backed 
up by sufficient evidence? 

The current paper examines the question of decreasing levels of academic 
literacy obliquely, with reference to a number of tests conducted over time at 
North-West University (NW) and the Universities of Pretoria (UP) and Stellen- 
bosch (US). It belongs to a series of investigations that we have done to determine 
the stability and robustness of the tests both across various administrations with- 
in the participating institutions (Weideman & Van der Slik, 2007) and over time. 
The tests in question are the Test of academic literacy levels (TALL) and its Afrikaans 
counterpart, the Toets van akademiese geletterdheidsvlakke (TAG). The purpose of 
these investigations is to ensure in the first instance a measure of theoretical 
defensibility by telling, as Shohamy (2001) exhorts us to do, "the story of a test". 
This is a first step towards the eventual public accountability that a test must also 
achieve. In another paper (Weideman & Van der Slik, 2007; cf. too Van der Slik and 
Weideman 2005), for example, we have already checked if the tests produce relia- 
ble outcomes when they are administered to different populations of newly ar- 
rived students. We plan to extend that investigation by performing a number of 
longitudinal analyses that will inform us about the ability of the tests to predict 
risk brought about by lower than adequate levels of academic literacy when a 
student enrols for study in higher education. 




¹ Of course, an argument could potentially be made to back up a decision to vary the difficulty of the admission tests over the years, perhaps for reasons of capacity, or for other reasons (see discussion, below). But by seeking to build in a guarantee, one at least has control over difficulty levels. 


These tests of academic literacy have now been used at the three different 
universities mentioned above since 2005. Recently, the test has also been adminis- 
tered to new students of the Medical Faculty of the University of Limpopo. Since 
the outcomes of the tests for the years 2006 and 2007 have now also become avail- 
able, we are currently in a position to give more serious consideration to the 
question raised in the subtitle of this article. 

Though this is easier said than done, one way of testing if secondary schooling 
nowadays turns out students whose ability is growing worse as compared to 
students from previous years is to compare their competence in academic lan- 
guage. What is needed for such a comparison is some Archimedean point that can 
be used to compare students' language abilities, specifically their academic litera- 
cy, over the years. The tests of academic literacy levels referred to above might 
provide just such a fixed point. However, despite the fact that the TAG and TALL 
have been extensively pretested on groups with known academic language abili- 
ty, there is no absolute guarantee that the difficulty of the tests has remained con- 
stant over the years. If, for example, the difficulty of the tests has increased over the 
years, one might arrive at the false conclusion that the academic literacy of first 
year students has deteriorated (while perhaps it has actually remained constant or 
has even risen). Needless to say, the outcomes of these analyses can have important 
consequences, both politically and for the lives of individual students. 

The latter point deserves some further elaboration. Until now, the tests of 
academic literacy referred to here have been employed as low to medium stakes 
tests. That is, based on their outcomes, low performing students were compelled 
to enrol for an intervention programme at UP and NW, while in the case of US 
students there is in certain faculties a gradual phasing in of such programmes. In 
all of these cases, no major disaster occurs for the students if, as a result of having 
taken a more difficult test, their academic literacy is underestimated as compared 
to the academic literacy of students of previous years. Some of the students may in 
such a case be compelled to undergo additional tuition that they perhaps did not 
fully need. But the picture will change dramatically if the test should be used as 
an admissions test. By their nature, such tests are high stakes tests, since they 
partially determine access to university education, and the expected lucrative 
future earning power that follows on this. In such a case it seems imperative 
that some guarantee needs to be given that students with a given level 
of academic literacy will have the same likelihood of passing the test, inde- 
pendent of the year in which they took the test.¹ 

One way of designing such a guarantee into the process of test devel- 
opment and administration is to make use of Item Response Theory (IRT) 
models rather than of classical test theory. A prerequisite for making use of 
the advantages of IRT modelling is that tests partly overlap, i.e. items in test 
of year (t) are to be found in exactly the same format in the test in year (t + 1) 




as well. Fortunately, this was the case for items of the TAG tests in 2005, 2006, and 
2007, and these three administrations of the test will therefore provide the basis 
for our analysis. We are not yet in a position to do the same for the other (English) 
version of the test (TALL), since here overlap is still either too small or absent. 
Though some of the discussion and analysis below will therefore refer to both 
TAG and TALL, since they are parallel tests, we envisage doing similar analyses 
on TALL once the degree of overlap is sufficient for such an analysis to be made. 
In such a case, we would test whether TALL shows a pattern similar to the 
findings presented here. 


2. Method 

Population and context 

In January and February of 2005, 2006, and 2007, the academic literacy of all new 
undergraduate students of the University of Pretoria, the Potchefstroom and Van- 
derbijlpark campuses of North-West University, and the University of Stellen- 
bosch was tested through the administration of the Test of academic literacy levels 
(TALL/TAG). At the University of Pretoria and North-West University, students 
are allowed to sit for either the English (TALL) or Afrikaans (TAG) test, and so 
have the freedom of choosing whichever language they feel more comfortable 
with in the academic environment. At the University of Stellenbosch, however, 
students have to take both tests. At this university, the English test was adminis- 
tered one day after the Afrikaans test. In total 17,659 students participated (but 
they do not necessarily represent different students: see Table 1, note); 9,449 took 
the Afrikaans test, while the remaining 8,210 students participated in the English 
version. See Table 1 for a detailed description. 


The tests: TALL and TAG, and their design 
The 2005 and 2006 versions of TALL and TAG each consist of 120 marks, 
distributed over seven subtests or sections (described in Van Dyk & Weideman, 
2004a and 2004b, and Weideman, 2006), six of which are in multiple-choice format: 
Section 1: Scrambled text 
Section 2: Understanding graphs and visual information 
Section 3: Understanding texts 
Section 4: Academic vocabulary 
Section 5: Text types 
Section 6: Text editing 
Section 7: Writing (handwritten; marked and scored only for certain borderline cases) 


Table 1: Population of first year students 

TALL      UP      US*     NW    Total 
2005   3 310   1 729    135    5 174 
2006   3 652   3 710    143    7 505 
2007   3 905   4 165    140    8 210 

TAG       UP      US*     NW    Total 
2005   2 701   1 702   2 521   6 924 
2006   2 547   3 703   2 650   8 900 
2007   2 582   4 160   2 707   9 449 

* Note: Stellenbosch students took the TALL the day after 
they took the TAG. 




The 2007 versions of TALL and TAG each consist of 100 marks, distributed over 
the first six subtests or sections, which are in multiple-choice format. Section 7 was 
omitted from 2007 on; borderline cases, who are identified by statistical means, 
are allowed to take another test, the results of which are used to decide if the 
student has passed or failed the test, and is at risk in terms of academic language. 

Students have 60 minutes to complete the test, and they earn a maximum of 
100 marks (some items counting 2 or 3 instead of 1). In another paper (Van der Slik 
& Weideman, 2005; cf. too Weideman & Van der Slik, 2007), the determination of 
the cut-off point has been discussed extensively. We will return to the issue of cut- 
off points below, where we will evaluate them in light of the outcomes of the IRT- 
based analyses. 


3. Analyses 

In order to perform IRT analyses, we make use here of the One-Parameter Logistic 
Model (OPLM) package developed by Norman Verhelst and his colleagues at 
CITO in the Netherlands (Verhelst, Glas & Verstralen, 1995). IRT analyses repre- 
sent an ability of persons such as, for example, academic literacy, in a mathemat- 
ical model. In an IRT analysis, the ability is usually denoted by the Greek letter 
theta (0). Persons with a high 0 (ability) are expected to have a high chance to give 
correct responses to difficult items, while persons with low ability are expected to 
have a low likelihood to answer difficult items correctly. The attractiveness of IRT 
modelling (as compared to, for example, Guttman scaling) is that persons who 
get difficult items correct, still have the likelihood to respond incorrectly to less 
difficult items. Similarly, less able persons have a chance to respond correctly to 
difficult items. Guttman scaling does not allow for such "inconsistencies". 
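The logistic relationship between ability and the probability of a correct response that underlies this family of models can be sketched as follows. This is a minimal illustration only; the item difficulties, discrimination value and abilities are invented for the example and are not drawn from TAG or TALL.

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """Probability of a correct response under a logistic item response model:
    P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# Invented items: one easy (b = -1.0) and one difficult (b = 1.5).
for theta in (-2.0, 0.0, 2.0):
    print(theta, round(p_correct(theta, -1.0), 2), round(p_correct(theta, 1.5), 2))
```

A person whose ability equals an item's difficulty has a probability of exactly 0.5 of answering it correctly; a high-ability person is likely, but not certain, to answer a difficult item correctly. This built-in tolerance is exactly the room for "inconsistencies" that Guttman scaling lacks.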

Various mathematical models may be used to represent the characteristics just 
described, but the critically important consideration for choosing the appropriate 
model would, no doubt, be the degree to which it would fit the data. In respect of 
IRT modelling, various fit measures can be employed to evaluate the model against 
the data, but a logistic curve has the most attractive set of features. One of the main 
models in IRT analyses is the Rasch model. A less attractive feature of this model, 
however, is that it assumes that all items measuring a specific ability have the same 
discrimination index. The One-Parameter Model that we are using in the current 
analysis relaxes this restriction by allowing discrimination indices to vary. It may 
thus represent the data better, since it is well known in classical test theory that test 
items may vary rather considerably as regards their discriminating power. One 
decisive advantage of IRT analyses over classical test theory, however, is that they 
can cope with incomplete designs. That is: the program can deal with different 
persons responding to different sets of items (or tests for that matter). See Figure 1. 




[Diagram not reproduced: rows represent persons taking Test 2005, Test 2006 and Test 2007; columns represent items 1 to 155, with overlapping blocks of items shared between the tests] 

Figure 1: Persons by items matrix 


In Figure 1, the rows represent persons taking the various tests, and the items are 
represented in the columns. This is a diagrammatic representation of three tests 
which overlap in part, i.e. items can be found in different tests and answered by 
different persons. For example, test item 12 is not just in Test 2005, but is found in 
Test 2007 as well, whereas items 13 and 14 are found in all three tests. Note that, for 
example, items 1 to 4 can only be found in Test 2005. In fact, Figure 1 represents the 
design we are working with in the present study. Note also that, since items may 
occur more than once, the number of unique items is smaller than the sum of the 
test items. In the present situation, 155 unique items are involved, whereas the 
total number of items is 62 + 62 + 63 = 187. 
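The arithmetic of such an overlapping design can be sketched with hypothetical item identifiers. The item ranges below are invented and chosen only so that the counts match the totals just mentioned; the actual composition of the overlap between the TAG versions is not reproduced here.

```python
# Hypothetical item sets for three linked tests; ranges are invented,
# chosen only so that the counts match the design described in the text.
test_2005 = set(range(1, 63))     # 62 items
test_2006 = set(range(48, 110))   # 62 items, 15 shared with 2005
test_2007 = set(range(93, 156))   # 63 items, 17 shared with 2006

total_administered = len(test_2005) + len(test_2006) + len(test_2007)
unique_items = len(test_2005 | test_2006 | test_2007)
print(total_administered, unique_items)  # 187 155
```

Because shared items are counted once in the union, the number of unique items (155) is smaller than the sum of the three test lengths (187); it is precisely these shared items that link the tests for the analysis.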

When tests are partly overlapping (and therefore linked), the OPLM program 
is able to estimate an ability distribution in which the item parameters can be 
estimated independent of the characteristics of the population. Rather than the 
difficulty of the items, it is the likelihood of correct answers, taking into account 
a person's ability, that matters. As a consequence, the ability distribution can be 
used to equate different tests in such a way that cut-off points for the tests reflect 
equal ability, or, in this case, levels of academic literacy. 
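As a sketch of what such equating amounts to, consider two hypothetical score-to-ability tables for linked tests. All values here are invented for illustration: if the later test is more difficult, each raw score on it corresponds to a higher ability, so a lower raw score preserves the same cut-off ability.

```python
def equivalent_score(theta_target, score_to_theta):
    """Smallest raw score whose estimated ability reaches the target ability."""
    for score in sorted(score_to_theta):
        if score_to_theta[score] >= theta_target:
            return score
    return max(score_to_theta)

# Invented tables: on the harder later test, each raw score maps to a
# higher ability than the same raw score on the reference test.
theta_ref   = {40: -0.6, 44: -0.3, 48: 0.0, 52: 0.3, 56: 0.6}
theta_later = {40: -0.3, 44: 0.0, 48: 0.3, 52: 0.6, 56: 0.9}

cutoff_ref = 48
target_ability = theta_ref[cutoff_ref]        # ability at the reference cut-off
print(equivalent_score(target_ability, theta_later))  # 44
```

In this toy example a cut-off of 48 marks on the reference test equates to only 44 marks on the harder later test, which is the kind of adjustment an ability-based equating makes explicit.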


4. Results 

Description ofthe sample 

Table 2 depicts the outcomes at scale level for TALL. As can be seen, there is a 
general trend of decreasing mean scores for the three universities included in this 
study. Simultaneously, the cut-off points were set to a lower level each year. As a 
result, the percentages of students who failed to pass the TALL remained more or 




Table 2: Descriptive statistics of TALL 


less constant for UP. In the case of North-West 
specifically, but also to some extent at the Univer- 
sity of Pretoria, one of the deliberate reasons of- 
fered for a specific annual cut-off point is teach- 
ing capacity (cf. Van der Slik and Weideman, 2005 
and 2007). 

Table 3 provides the outcomes for TAG. It can 
be seen that the general trends observed for TALL 
are also found in TAG. If one looks only at these 
numbers, the academic literacy of newly arrived 
students at the University of Pretoria, the Uni- 
versity of Stellenbosch, and of North-West ap- 
pears to have deteriorated over the years, thus 
providing an affirmative answer to the question 
posed in the subtitle of this paper. These outcomes 
can also be visualized as in Figures 2 and 3. 

When Figures 2 and 3 are considered in isola- 
tion, one might be tempted to conclude that 
something is indeed wrong with the academic 
literacy of newly arrived students, since the out- 
comes consistently demonstrate a declining abil- 
ity for the three universities involved, not just 
for the students taking the English test (TALL), 
but also for those who took the Afrikaans test 
(TAG). 

But is this actually what happened? We have 
tested this hypothesis by means of OPLM. As was 
remarked above, we could, unfortunately, per- 
form these analyses only for the Afrikaans test, 
because only these tests were linked, by partial 
overlap, in the manner we have described above. 
We did so by taking the TAG 2005 outcomes as 
the reference or Archimedean point. That is: we 
took the mean ability associated with the proportion 
that passed the test as the starting point for 
each university separately. OPLM has, in other 
words, made it possible for us to show in detail 
how ability scores are associated with test scores in 2005, 2006 and 2007. By trans- 
posing the 2005 mean ability scores onto the 2006 and 2007 test scores, we were 
able to estimate the proportion of students that would have passed these 2006 and 
2007 tests, assuming ability equal to that of 2005. In such an analysis, 2005 is thus 



         UP       US       NW 
MEAN (range 1-100) 
2005   71.75    76.89    59.70 
2006   64.32    68.46    56.27 
2007   61.11    64.98    50.44 
Cut-off point 
2005    68.5     68.5     67.5 
2006    55.5     55.5     49.5 
2007    50.5     57.5     42.5 
Percentage failed 
2005   34.26    22.73    56.30 
2006   31.30    23.02    34.97 
2007   31.50    32.58    40.00 

Table 3: Descriptive statistics of TAG 

         UP       US       NW 
MEAN (range: 1-100) 
2005   70.16    63.15    63.08 
2006   60.18    53.53    54.07 
2007   56.66    51.78    51.14 
Cut-off point 
2005    60.5     50.5     55.5 
2006    50.5     50.5     49.5 
2007    45.5     42.5     45.5 
Percentage failed 
2005   23.84    26.85    31.14 
2006   25.21    43.78    40.35 
2007   24.98    33.25    38.27 




[Line graph not reproduced: mean TALL scores for UP, US and NW, each declining from 2005 to 2007] 

Figure 2: Mean scores on TALL for 2005(1), 2006(2), and 2007(3) 



accepted as the base year, and the scores of the subsequent years are interpreted 
with reference to the mean scores of the base year. In Figures 4 and 5 we present 
the outcomes. 

It seems quite obvious that Figure 4 leads to a completely different conclusion 
than Figure 3. Instead of a trend of declining ability, no apparent trend can be 
observed! The cause of this is that, instead of there being a declining ability in 
terms of academic literacy for students, the difficulty of the tests has in fact in- 
creased. And this increasing difficulty over the three years has not been fully com- 
pensated for by adjusting the cut-off points. Figure 5 graphically represents this. 




[Bar graph not reproduced: proportions of UP, US and NW students passing the TAG, showing no apparent trend across the three years] 

Figure 4: Proportions passed on TAG assuming equal difficulty for 2005(1), 
2006(2), and 2007(3) 



Figure 5: Cut-off points assuming equal difficulty for 2005(1), 2006(2), 
and 2007(3) 


As can be seen in Figure 5, the cut-off points for the 2005 TAG test were around 48 
marks. If we intend to measure a mean ability in 2006 and 2007 that is equal to the 
one measured in 2005, then students would have needed fewer marks (between 
42 and 46) to make the cut-off point. Or put differently: If we equalise the tests in 
our analysis over the period in question, by holding their results steady in terms 
of the 2005 starting point, then we note that the 2005 test was in fact easier than the 
2006 and 2007 tests, since a student who had scored around 48 marks on the 2005 
test would have required a lower score (of between 42 and 46) on the 2006 and 




2007 test in order to make the cut-off point. 

The differences between the cut-off points of the three universities involved 
have to do with several factors. It can be seen, for example, that the drop off 
between 2005 and 2006 for North-West students is steeper than for Stellenbosch 
students. This may look odd, since they took identical tests; so the drop off may be 
expected to be of the same magnitude. Two comments can be made about this. 
Firstly, the observed differences may in part be coincidental, resulting from meas- 
urement error. Secondly, from the point of view of testing academic literacy lev- 
els, it may be that the students from North-West come from a population that is 
different from the population that the students of Pretoria and Stellenbosch are 
recruited from, in the sense that the universities involved may be employing 
different entry requirements. Thus different universities may land up with dif- 
ferences among their respective first year populations that are greater than one 
would expect at first glance. 


5. Conclusion 

In this article we have tested if the academic literacy of newly arrived students at 
three universities has deteriorated over the period 2005-2007. A superficial anal- 
ysis may indeed indicate that this is the case. However, having performed analy- 
ses by means of the One-Parameter Logistic Model package (OPLM), we found 
nothing of this kind. On the contrary, rather than a decrease of academic literacy, 
the analyses have shown that the tests themselves have increased in difficulty. It 
is only when we do not fully compensate for this increased difficulty that it 
appears as if academic literacy has deteriorated. In fact, the academic literacy of 
newly arrived Afrikaans speaking students has proved to be remarkably constant 
over the past three years. 

This study has several limitations which have to be addressed in future re- 
search. First, the time period under study may be too short to arrive at definitive 
answers regarding the possibility of a decline in academic literacy in South Africa. 
Second, we were only able to analyze the results of the Afrikaans test. This is 
unfortunate, because Afrikaans is taught mainly at formerly privileged schools, 
whereas English is taught at both formerly privileged and at formerly deprived 
and extremely deprived schools. Though the outcomes for the Afrikaans and the 
English test of academic literacy levels at first glance appear to be heading in the 
same direction, most of those with experience in research on language testing will 
be wary of relying solely on these impressions before they can be empirically tested 
and in some way quantitatively verified in a similar fashion. Thirdly, we have 
used only the TAG as an indicator of academic literacy levels. Other, similar tests 
may and should be used to study trends in academic literacy levels. This is not to 




imply that the results are due to the unreliability of the test. On the contrary, the 
TAG and TALL tests have proved to be highly reliable over the years. What we 
mean is that other measures than the standard paper-and-pencil tests might also be 
useful. Fourth, an issue that ties in with the second point above: our finding that 
academic literacy levels have remained more or less similar over the years applies 
to this very specific and fairly select group of students. As one of the reviewers of 
this article has emphasized, we have worked here with a cohort of testees that is 
circumscribed in terms of language. What if the population is more varied in terms 
of first language, and if the ability that we have measured is not as evenly distrib- 
uted across other test populations as we have assumed here? We would therefore 
need to test the kinds of conclusions we have reached here against a larger, and 
perhaps more diverse group. Finally, to take up another point of the same review- 
er, the analysis challenges us as test developers and as users of the results of these 
kinds of tests to be careful about assuming equivalence among different versions of 
tests. When we interpret test results, we should refer to test difficulty, which im- 
plies a measure of comparison that we may not yet have or may not even have 
planned for. We cannot simply interpret such results at face value. 

Notwithstanding these limitations, the outcomes of our analyses have made it 
quite clear, we think, that IRT modelling is a useful tool to get a better under- 
standing of the difficulty of tests and test items. In that sense, it may enable us to 
arrive at responsible decisions that do more justice to those who take the tests, in 
that they are doubly accountable, both in being defensible in a theoretical or 
empirical sense, and in being accountable to a larger public (Weideman, 2006 and 
2007). This is not only helpful for low stakes tests such as TALL and TAG, but also 
of critical importance when the stakes are higher, for example in the case of access 
or admission tests. There should really be no dispute about the condition that the 
likelihood of passing a test should depend on students' ability as expressed in 
terms of a score that captures their level of academic literacy. IRT modelling gives 
us one way of doing justice to the expression of that ability in the measurements 
that are made with similar instruments over time. The exceptions to this condi- 
tion in our context occur generally in the case of tests that are used for access to 
higher education. In such a case a political decision may be taken where, say, 
those belonging to a specific, previously disadvantaged group who perform in 
the top three deciles of a test of academic literacy compared with their peers in 
that same group, may be granted access to university study, even though, in 
comparison with others that do not belong to their group, they may not have 
made it. The groundbreaking work for making such a decision responsibly has 
been done in South Africa by the Alternative Admissions Research Project of the 
University of Cape Town (cf. Cliff & Yeld, 2006, Cliff, Yeld & Hanslo, 2003, Visser & 
Hanslo, 2005). Though such decisions clearly need to be taken on some political 
cue, and not one based solely on a measure of ability, they would need their own 




arguments to be defensible and justifiable. But even in this case, IRT modelling 
might prove to be a valuable tool for underpinning such arguments. It is conceiv- 
able, for instance, that if there are differences in academic literacy that are associ- 
ated with membership of different cultural or other groups, such differences will 
not remain constant over the years. IRT modelling will also be particularly 
suitable for noticing and monitoring such changes. 


Acknowledgements 

We are indebted to two anonymous reviewers, and to Tobie van Dyk of the University of 
Stellenbosch, for suggesting improvements both to the consistency of the text and to the 
substance of the argument. 


Bibliography 

Cliff, A.F. & Yeld, N. 2006. Test domains and constructs: Academic literacy. In: Griesel, H. (ed.). 
2006: 19-27. Access and entry level benchmarks: The National Benchmark Tests project. Pretoria: 
Higher Education South Africa. 

Cliff, A.F., Yeld, N. & Hanslo, M. 2003. Assessing the academic literacy skills of entry-level 
students, using the Placement Test in English for Educational Purposes (PTEEP). Paper 
read at Bi-annual conference of the European Association for Research in Learning and 
Instruction (EARLI), Padova, Italy. 

Rademeyer, A. 2007. SA onderwys stuur op ramp af, maan kenner. Beeld, 17 August: 12-13. 

Shohamy, E. 2001. The power of tests: A critical perspective on the uses of language tests. Harlow: 
Pearson Education. 

Van der Slik, F. & Weideman, A.J. 2005. The refinement of a test of academic literacy. Per linguam 
21(1): 23-35. 

Van der Slik, F. & Weideman, A.J. 2007. Measures of improvement in academic literacy. 
Submitted to Southern African linguistics and applied language studies. 

Van Dyk, T. & Weideman, A. 2004a. Switching constructs: On the selection of an appropriate 
blueprint for academic literacy assessment. SAALT Journal for language teaching 38(1): 1-13. 

Van Dyk, T. & Weideman, A. 2004b. Finding the right measure: From blueprint to specification 
to item type. SAALT Journal for language teaching 38(1): 15-24. 

Verhelst, N.D., Glas, C.A.W. & Verstralen, H.H.F.M. 1995. One-parameter logistic model OPLM. 
Arnhem: Cito. 

Visser, A. & Hanslo, M. 2005. Approaches to predictive studies: Possibilities and challenges. 
South African journal of higher education 19(6): 1160-1176. 

Weideman, A.J. 2003. Assessing and developing academic literacy. Per linguam 19(1 & 2): 55-65. 

Weideman, A.J. 2006. Transparency and accountability in applied linguistics. Southern African 
linguistics and applied language studies 24(1): 71-86. 

Weideman, A.J. 2007. A responsible agenda for applied linguistics: Confessions of a philosopher. 
Keynote address, joint LSSA/SAALA/SAALT 2007 conference. Submitted to Per linguam. 

Weideman, A.J. & Van der Slik, F. 2007. The stability of test design: Measuring difference in 
performance across several administrations of a test of academic literacy. Forthcoming in 
Acta academica. 

Widdowson, H.G. 2005. Applied linguistics, interdisciplinarity, and disparate realities. In: 
Bruthiaux, P., Atkinson, D., Eggington, W.G., Grabe, W. & Ramanathan, V. (eds.). 2005: 12-25. 
Directions in applied linguistics: Essays in honor of Robert B. Kaplan. Clevedon: Multilingual 
Matters. 


Pragmatic validation of a test of academic literacy at 
tertiary level 

Johann L. van der Walt and H.S. Steyn (Jnr.) 

North-West University (Potchefstroom Campus) 


Pragmatic validation of a test of academic literacy at tertiary level 

Validity is a fundamental consideration in language testing. Conceptions of 
validity have undergone a number of changes over the past decades, and validity is 
now closely connected with the interpretation of test scores. Validity remains an 
abstract concept, however, and can only be accessed through a process of validation. 
This article illustrates an approach to the validation of a test by postulating a 
number of claims regarding an administration of an academic literacy test (Toets 
van Akademiese Geletterdheidsvlakke) and presenting various kinds of evidence to 
investigate these claims. 


1. Introduction

It is generally accepted that validity is the central concept in language assess- 
ment. The AERA/APA/NCME (1999) test standards regard it as the most funda- 
mental consideration in developing and evaluating tests. It is a complex concept, 
which has undergone a number of different interpretations. At present it is gen- 
erally acknowledged that validity is contextual, local and specific, pertaining to 
a specific use of a test, i.e. one asks whether the test is valid for this situation. The 
validity of tests is determined through the process of validation, a process of test
score interpretation, before the results can be used for a particular purpose. In 
order to determine the validity of a test, a validation argument has to be con- 
structed, on the basis of which it can be suggested whether the interpretations 
and uses of the test results are valid. 

The purpose of this article is to discuss current conceptions of validity and 
then illustrate the process of validation by constructing a validation argument for 
a widely used test of academic literacy. We will propose a number of claims and 
illustrate methods with which to investigate these, and use the test of academic 
literacy levels (Toets van Akademiese Geletterdheidsvlakke) administered at the Potch-
efstroom campus of North-West University in 2007 to illustrate a posteriori valida- 
tion procedures. 


Ensovoort: jaargang 11, nommer 2, 2007


139 


2. Validity 

The concept of validity is not a fixed one, but has undergone different interpreta- 
tions over the past 50 years. Two main perspectives can be distinguished. 

The first is often called the 'traditional' view, which involves the question of 
whether one measures what one intends to measure. It considers validity to be an 
inherent attribute or characteristic of a test, i.e. a test is valid if it measures what it 
claims to be measuring (Kelley, 1927; Cattell, 1946; Lado, 1961). Three major types
of validity are identified: criterion-related validity (including concurrent and 
predictive validity), content-related validity, and the one introduced by Cron- 
bach and Meehl (1955), construct validity. The traditional approach reflects a 
positivistic paradigm, which assumes that a psychologically real construct or 
attribute exists in the minds of the test takers - this implies that if something does 
not exist, it cannot be measured. Variations in the attribute cause variation in test 
scores. Validity is thus based on a causal theory of measurement (Trout, 1999). 
Reliability, the index of measurement consistency, is regarded as distinct from 
validity, and is a necessary condition for validity. 

The second view evolved in the 1980s, and replaced the definition of three 
validities with a single unified view of validity; one which portrays construct 
validity as central component (Chapelle, 1999: 256), and regards content and 
criterion validity as aspects of construct validity (Messick, 1989: 20). Messick's 
(1989) paper provided a seminal although somewhat opaque exposition of this 
view. This view - a more naturalistic, interpretative one - is the most influential 
current theory of validity. It shifted validity from a property of a test to that of test 
score interpretations. Validity is an integrated evaluative judgement based on
theoretical and empirical evidence, which supports test score interpretation and 
use (Messick, 1989: 13). It is seen as a unitary but multifaceted concept. Messick 
(1989) introduced his much-quoted progressive matrix - the types of research 
associated with validity, which involve the definition and validation of the con- 
struct, decisions about the individual (involving fairness and relevance) [infer- 
ences], definition of social and cultural values, and real world decisions, e.g. 
admission or placement [uses]. Messick's innovation was the introduction of 
consequential validity, i.e. the social consequences and effects of a test on an 
individual and society. Test consequences thus became a central part of validity. 
Admission or placement decisions can have a major impact on a test taker, and 
therefore aspects such as affect, washback and ethics are considered part of conse- 
quential validity (cf. Fulcher & Davidson, 2007; Madsen, 1983; Hughes, 1989; 
Chapelle, 1999). In addition, the test context (including the environment, such as 
room temperature or seating), which may introduce construct-irrelevant vari- 
ance, can have an impact on the test scores (cf. Fulcher & Davidson, 2007: 25). 
Brown (1996: 188) also mentions administration procedures and the environment 
of test administration as relevant factors. Weir (2005: 51) includes test taker charac- 




teristics (e.g. emotional state, concentration, familiarity with the test task types) as 
factors that may influence test performance, and ultimately affect the validity of 
the test scores. 

This incorporation of a social dimension into validity, which can be interpret- 
ed fairly broadly, has not been without controversy, as many critics argue that 
validity does not involve decision-based interpretations, but only descriptive 
interpretations. But the psychometric tradition of language testing has obscured 
the role and effect of language testing in society, especially its sorting and gate- 
keeping roles, which ultimately depend on the policies and values that underlie 
any test. The practice of decision-based interpretations has now become part of 
validity, although as yet there is no coherent theory of the social context in cur- 
rent validity theory. 

The current interpretive paradigm thus allows a variety of data to inform test 
validity. In essence, validity is the truth-value of a test. But the question is: what 
is real or true? Truth remains a relative concept, a question of judgement, a matter
of degree, subject to new or more relevant evidence. There is no such thing as an 
absolute answer to the validity question (Fulcher & Davidson, 2007: 18). This 
view allows every important test-related issue to be considered as relevant to the 
validity concept integrated under a single header. Validity therefore involves a 
chain of inferences. Any construct has to be empirically verifiable, and validity 
claims depend on the evidence provided to support it. Fulcher (1997) emphasises 
the fact that validity is a relative and local affair. The argument for local validity is 
not current - it was advanced by Lado (1961) and Tyler (1963) decades ago. Weide-
man (2006a: 83) also stresses that the social dimension is unique to each test ad- 
ministration. Tests are valid for a specific use, but determining validity is an 
ongoing and continual process (Davies & Elder, 2005). 

This second view of validity gained prominence in language testing in the 
1990s, when Bachman (1990) introduced Messick's ideas to language testing research.
The idea of validity as a unitary concept is now accepted by most re-
searchers. Construct validity is generally regarded as the overarching validity 
concept, but there is still variation in the use of terminology and the sub-types of 
validity proposed. Bachman and Palmer (1996) introduced the overarching con- 
cept of test usefulness, but Bachman (2005) later returned to validity as metanarra- 
tive. Weir (2005: 14) also proposed the re-introduction of validity as superordi- 
nate category, and postulated the subcategories of context validity, theory-based 
validity, scoring validity and external validity. McNamara (2003: 470) points out 
that the social dimension of validity is now a "prime topic" in language testing 
debates (cf. McNamara & Roever, 2006). 

In the second view, reliability is no longer regarded as a separate quality of a 
test, but is part of overall validity. Weir (2005: 14) says: "... the traditional polari- 
sation of reliability and validity ... is unhelpful and reliability would be better 




regarded as one form of validity evidence". Most researchers still regard reliabil- 
ity as important, as in principle a test cannot be valid unless it is reliable (it can be 
reliable but invalid) (Alderson, Clapham & Wall, 2005: 187). 

The second view of validity is, of course, not without its critics. One of the 
reasons is that it seems natural and instinctive to consider validity to be a feature 
of a test. Borsboom, Mellenburg and Van Heerden (2004: 3) say: " . . . we think that 
the argument that shifted the meaning of validity from tests to score interpreta- 
tions is erroneous". They argue that there is no reason why one would conclude 
that the term validity can only be applied to test score interpretations. They pro- 
pose a return to the traditional view (e.g. Kelley, 1927: 14), which states that a test 
is valid if it measures what it purports to measure, even though one can only 
validate interpretations. They argue that current accounts of validity only super- 
ficially address theories of measurement. Fulcher and Davidson (2007: 279) also 
ask: "Has this validity-as-interpretation mantra perhaps become over-used? If a 
test is typically used for the same inferential decisions, over and over again, and 
if there is no evidence that it is being used for the wrong decisions, could we not 
speak to the validity of that particular test - as a characteristic of it? Or must we be 
on constant guard for misuse of all tests?" 

The view of validity as interpretation is now widely accepted. But it is de- 
pendent on test results being used for the purpose for which the test is designed. 
Score interpretation must therefore be valid. Various factors can affect the inter- 
pretation, including extemal factors, as we have seen. Sufficient evidence allows 
a conclusion about overall test quality - its validity. It starts as local affair, with 
repeated use of a test for one purpose only, and ultimately one can argue that 
validity becomes a property of the test, i.e. that it tests what it purports to test; that 
it tests a property that exists and can be measured. 


3. Validation 

Validity can only be accessed through validation. Validity, in Messick's (1989) 
terms, remains an abstract concept, and validation is the process of operationaliz- 
ing the construct of validity. It is an activity: the collection of all possible test- 
related activities from multiple sources. The validation process therefore involves 
the accumulation of evidence to support the proposed test score interpretations 
and uses (Lane, 1999: 1). The process is based on Kane's (1992) systematic ap- 
proach to thinking through the process of validation. Kane sees validation as the 
construction of "an interpretative argument"; a number of inferences following 
each other, ending in decisions made about a candidate. 

Davies and Elder (2005: 804) point out that it is not easy to operationalize 
Messick's (1989) and Bachman's (1990) intricate conception of validity. McNama- 
ra and Roever (2006: 33) also refer to a "decade and more of grappling" with this 




complex validity framework. Bachman (2005: 267), in an attempt to make test 
validation a manageable process, suggests the following procedure: 

• Articulating a validation argument, which provides the logical framework 
linking test performance to an intended interpretation and use. 

• Collecting relevant evidence in support of the intended interpretation 
and use. 

Evidence collected may include what Davies and Elder (2005: 798) call the "usual 
suspects" of content, construct and criterion validity, as well as reliability. But 
additional sources of validity evidence are also allowed (Davies & Elder, 2005: 
801), mostly as part of the social dimension of testing (consequential validity 
interpreted broadly), such as student feedback, test consequences, ethics, social 
responsibility, washback, affect and impact of test scores. But there is not always 
a principled way of combining all the elements that can be regarded as validation
evidence. Fulcher and Davidson (2007: 18) thus speak of a pragmatic approach to 
validity; an approach that "best explains the facts available". 

The validation process involves the development of a coherent validity argu- 
ment for and against proposed test score interpretations and uses. It takes the 
form of claims or hypotheses (with implied counterclaims) plus relevant evi- 
dence. But we must also examine potential threats to the validity of score inter- 
pretation. Kane, Crooks and Cohen (1999: 15) point out that "the most attention 
should be given to the weakest part of the interpretative argument because the 
overall argument is only as strong as its weakest link". Validation is therefore as 
much a process of raising doubts as of positive assertion. 

What constitutes an adequate argument? Fulcher and Davidson (2007: 20) 
suggest the following basic criteria: 

• Simplicity: explain the facts in as simple a manner as possible.

• Coherence: an argument must be in keeping with what we already know.

• Testability: the argument must allow us to make predictions about future
actions or relationships between variables that we could test.

• Comprehensiveness: as little as possible must be left unexplained.

We now illustrate an approach to test validation by analysing the January 2007 
TAG test administration and results. As the administration of the test, with the 
interpretation and use of the results, is an expensive and important exercise for its 
stakeholders (university management, students, parents and lecturers), the valid- 
ity of the test is of major importance. Davies and Elder (2005: 802-3) report that 
relatively few comprehensive validation studies have been undertaken, and this 
article is an attempt to make a contribution in this regard. 




4. TAG test: Validation claims and relevant evidence 

The TAG test 


The Toets van Akademiese Geletterdheidsvlakke 1 (Test of Academic Literacy 
Levels), or TAG, was administered to all first-year students at the Potchef- 
stroom campus at the beginning of 2007. It was aimed at establishing whether 
these students possessed the necessary academic literacy skills to succeed 
in their content subjects. It was a medium- to high-stakes test, as students
who failed had to enrol for a course in Academic Literacy, for which par-
ents then had to pay an extra fee. TAG is a short test of 55 minutes, in
multiple-choice format. The 2007 test, in Afrikaans, contained 63 items, and was
administered to 2773 students. 

The test content was based on a number of components that make up the 
construct of academic literacy. These are described in Van Dyk and Weideman 
(2004) and Weideman (2007). It is assumed that these components, taken together, 
constitute the construct 'academic literacy'. We accept this definition of the con- 
struct as valid for the purposes of this a posteriori analysis. 

The test was divided into sections that tested the following: Placing five scram- 
bled sentences into the correct sequence; interpreting a graph (a histogram); 
answering comprehension questions on a reading passage; deciding which phrase 
or sentence has been left out in a text; defining academic vocabulary items; iden- 
tifying text types; and deciding where a word had been left out in a text, and 
which word had been left out. 

Each inference in a validity argument is based on an assumption or claim that
requires support (Bachman, 2005: 264; Lane, 1999: 1; Chapelle, 1999: 259). In our
validation study of the TAG test, we constructed a number of claims and collected
evidence to support these claims.


Validation evidence 

Claim 1: The test is reliable and provides a consistent measure,
with little of the score variance attributable to measurement error.

A completely reliable test implies that test scores are free from
errors and can be depended on for making decisions about
placement or admission. The use of internal consistency coefficients
to estimate the extent of the reliability of objective-
format tests is the industry standard. Reliability coefficients do not provide evi- 
dence of test quality as such: the estimated reliability is "not a feature of the test, 
but rather of a particular administration of the test to a given group of examinees" 
(Weir, 2005: 30). The internal consistency for each section and for the whole test 
was determined by calculating the Cronbach alpha coefficients. The results are 
displayed in Table 1. 


1. The TAG test construct and the results of numerous administrations of different
versions of the test have been discussed in a number of publications, such as Van
Dyk and Weideman (2004), Van der Slik and Weideman (2005), Weideman (2005),
Weideman (2006a & b) and Weideman (2007).


Table 1: Cronbach alpha coefficients

            Alpha   No. of items
Section 1   0,84    5
Section 2   0,64    7
Section 3   0,70    22
Section 4   0,62    9
Section 5   0,67    5
Section 6   0,89    15
TAG         0,88    63





One should bear in mind that the alpha coefficient is a func- 
tion of the number of items it is based on. The reliability coeffi- 
cients for Sections 2, 3, 4, and 5 are below the generally accepted 
norm of 0,8 (Weir, 2005: 29), while the alpha for the test as a whole 
is very good at 0,88. 
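The internal-consistency computation behind these coefficients can be sketched in a few lines. This is a minimal illustration with a hypothetical matrix of 0/1 item scores, not the actual TAG response data; `cronbach_alpha` is an illustrative helper, not part of any named package.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x items) matrix of 0/1 scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of candidates' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores of six candidates on four dichotomous items
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(scores), 2))  # 0.74
```

The coefficient rises as candidates' item responses covary more strongly, which is why it also grows with the number of items a section contains.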

Claim 2: The general ability of candidates matches the general
level of difficulty of test items.

An item-response analysis was performed, using the FACETS computer program
(Facets for Windows Version No. 3.61.0). A two-facet Rasch model was fitted to
the scores on the 63 items of the TAG for the 2773 students. In order to
validate this model, the following diagnostic methods were applied (Hambleton,
Swaminathan & Rogers, 1991), using FACETS and STATISTICA (StatSoft, Inc., 2006):

• A plot of the student abilities estimated from the more difficult items
(the upper half of items on the measurement scale) vs. those estimated
from the easier items (the remainder) showed most of the points lying
below the line of equality. This means that the model based on the more
difficult items predicts a lower ability for a student than a model
based on the easier items.

• A plot of the student abilities estimated from the odd-numbered items
vs. those from the even-numbered items displays points that are evenly
distributed along the equality line, with a high correlation (0,83),
which means that a student's estimated ability is similar for two sets
of items selected from the test.

• A plot of item difficulties for higher ability students vs. 
those for lower ability students: with a high correlation 
(0,88) and even distribution of the points along the equality 
line, it seems that the difficulties of items are predicted 
similarly for the model fitted on the low and high ability 
groups of students. 

• Comparison of the distributions of the standardised residuals for the
data with those of data simulated from the fitted model: since the
histograms for the two distributions are very similar, the model fits
the simulated data in the same manner as the original data.


Figure 1: Item-ability map 


We therefore conclude that, on the whole, there was an appropri- 
ate fit of the model on the data. 
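The odd/even diagnostic above can be sketched as follows. This is a simplified stand-in, not the FACETS procedure itself: `ability_logits` uses the log-odds of each candidate's proportion correct as a crude proxy for Rasch ability estimates, and the response matrix is simulated rather than the TAG data.

```python
import numpy as np

def ability_logits(scores: np.ndarray) -> np.ndarray:
    """Crude per-candidate ability estimate: the log-odds of the proportion
    of items answered correctly (a stand-in for a full Rasch fit)."""
    p = scores.mean(axis=1)
    p = np.clip(p, 0.05, 0.95)  # avoid infinite logits at 0% or 100% correct
    return np.log(p / (1 - p))

# Simulated 0/1 response matrix: 200 candidates of increasing ability, 64 items
rng = np.random.default_rng(1)
success = np.linspace(0.2, 0.9, 200)[:, None]
scores = (rng.random((200, 64)) < success).astype(int)

odd = ability_logits(scores[:, 0::2])   # abilities from odd-numbered items
even = ability_logits(scores[:, 1::2])  # abilities from even-numbered items
r = np.corrcoef(odd, even)[0, 1]
print(round(r, 2))  # points near the equality line give a high correlation
```

A high correlation between the two half-test ability estimates, as in the 0,83 reported above, supports the claim that the model's ability estimates do not depend on which items are sampled.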




The item-ability map in Figure 1 displays the distributions of the students' 
abilities and the TAG item difficulties, both relative to a logit-scale measure. This 
measure varies from +5 at the top to -5 at the bottom, the larger values indicating 
better student abilities and more difficult items, while lower values indicate poorer 
student abilities and easier items. The map provides estimates of person ability 
and item difficulty. These are expressed in terms of the relation between the 
ability of individual candidates and their relative chances of giving a correct 
response to items of given difficulty; the chances being expressed as logits (Mc- 
Namara, 1996: 200). This map allows comparison of candidate ability and item 
difficulty. 

From this display it is clear that no extreme difficulties occurred, while only a 
very few students had extreme abilities outside the limits -3 and 3. There was no 
significant mismatch; the ability of the candidature was at the general level of 
difficulty of the items, and there was a good fit between student ability and item 
difficulty. 
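The logit relation that underpins the map can be made concrete: under the Rasch model, the probability of a correct response depends only on the difference between person ability and item difficulty, both expressed in logits. A minimal sketch (`rasch_p` is an illustrative helper):

```python
import math

def rasch_p(ability: float, difficulty: float) -> float:
    """Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# A candidate whose ability equals the item's difficulty has a 50% chance:
print(round(rasch_p(0.0, 0.0), 2))   # 0.5
# One logit of ability advantage raises the chance to about 73%:
print(round(rasch_p(1.0, 0.0), 2))   # 0.73
# The hardest 2007 item (measure 1,77) against a candidate of average ability:
print(round(rasch_p(0.0, 1.77), 2))  # 0.15
```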

Claim 3: Infit mean square values of test items fall within an acceptable range. 

By means of item-response analysis, a Rasch model that summarises the observed 
patterning throughout the set of relations between candidates and test items was 
fitted. Here we wish to consider the extent to which the pattern of responses 
observed for individual items conforms to and reinforces the general pattern in 
the model, or goes against it (McNamara, 1996: 169). If the pattern for the individ- 
ual items, allowing for normal variability, fits the overall pattern, the items show 
appropriate 'fit'. If not, they are 'misfitting' or 'overfitting' items (McNamara, 1996: 
169-175). The fit statistics in Table 2 give the difficulty levels of items as measured
on the logit-scale, ordered from the most difficult item (no. 21 with measure 1,77)
to the easiest one (item 1 with measure -2,27), together with the fit statistic 'infit
mean square'.

McNamara (1996: 172) points out that infit statistics are informative as they 
focus on the degree of fit in the most typical observations in the model. He states 
that infit mean square values have an expected value of 1; individual values will 
be above or below this according to whether the observed values show greater 
variation (resulting in values greater than 1) or less variation (resulting in values 
less than 1) (McNamara, 1996: 172). McNamara (1996: 173) suggests that values in 
the range of 0,75 to 1,3 are acceptable. Values greater than 1,3 show significant 
misfit, i.e. lack of predictability, while values below 0,75 show significant overfit. 

Since the infit mean squares have values that vary between 0,97 and 1,04, all 
items seem to be in accordance with the fitted Rasch model. 




Table 2: Fit statistics

Observed % correct   Measure   Infit mean square   TAG item
19.8                  1.77      1.02                21
25.5                  1.39      0.99                46
30.9                  1.08      1.02                45
32.3                  1.00      1.00                61
32.6                  0.99      1.01                50
34.9                  0.87      1.01                9
37.3                  0.74      0.99                20
37.3                  0.74      0.99                32
37.5                  0.73      1.00                63
38.2                  0.70      0.97                17
38.2                  0.70      0.99                41
38.7                  0.68      0.99                60
39.9                  0.62      1.00                16
40.3                  0.60      1.01                37
41.7                  0.53      1.00                23
41.8                  0.53      1.00                26
41.8                  0.52      1.02                56
41.9                  0.52      0.97                49
42.8                  0.47      1.00                57
43.4                  0.45      1.00                62
43.5                  0.44      0.98                42
43.5                  0.44      0.99                15
45.1                  0.37      0.98                35
46.7                  0.29      0.97                34
47.9                  0.23      1.02                8
50.3                  0.12      1.04                3
50.5                  0.11      1.01                40
50.6                  0.11      1.00                59
50.6                  0.11      1.01                27
51.8                  0.05      0.99                18
52.3                  0.03      0.98                19
52.7                  0.01      1.00                2
53.0                 -0.01      1.00                11
53.0                 -0.01      0.99                24
53.3                 -0.02      1.00                47
53.5                 -0.03      0.99                4
53.6                 -0.03      1.00                58
54.4                 -0.07      0.99                31
54.6                 -0.08      1.02                44
54.9                 -0.09      1.01                10
57.2                 -0.20      1.00                22
58.6                 -0.27      1.02                14
59.5                 -0.31      1.01                43
61.0                 -0.39      1.00                25
62.1                 -0.44      1.00                55
62.1                 -0.44      0.99                12
63.0                 -0.48      0.99                54
63.1                 -0.49      1.00                30
63.4                 -0.51      1.02                51
65.6                 -0.61      1.01                48
65.8                 -0.62      0.99                29
70.0                 -0.85      1.00                53
70.6                 -0.88      1.01                5
71.2                 -0.91      1.01                28
71.5                 -0.93      0.98                13
72.7                 -1.00      0.99                52
74.6                 -1.11      1.00                7
75.3                 -1.15      1.02                39
75.7                 -1.18      1.02                6
83.1                 -1.69      1.00                36
88.5                 -2.18      0.99                38
89.4                 -2.27      1.00                1


Claim 4: The internal correlations of the different test sections satisfy specific 
criteria. 

Bachman (1990: 258) states that patterns of correlations among item scores and 
overall test scores provide evidence of construct validity. Alderson et al. (2005: 
184) indicate that an internal correlation study can be used in order to examine 
this. They point out that the reason for having different test sections is that they 
all measure something different and therefore contribute to the overall picture of 
the attribute. They expect these correlations to be fairly low - possibly in the
order of 0,3 to 0,5. If two sections (components) correlate very highly with each
other (e.g. 0,9), one might wonder whether the two sections are testing different
attributes, or whether they are testing essentially the same thing. The correlations
between each section and the whole test, on the other hand, might be expected to be
higher - possibly around 0,7 or more - since the overall score is taken to be a more
general measure of the attribute than each individual section score.

Table 3: Correlations

Section      1       2       3       4       5       6    Total
1                                                         0,294
2        0,207                                            0,387
3        0,297   0,378                                    0,514
4        0,214   0,312   0,495                            0,573
5        0,141   0,161   0,304   0,358                    0,441
6        0,146   0,195   0,282   0,388   0,391            0,406
Total    0,447   0,533   0,763   0,697   0,557   0,737
Alderson et al. (2005: 184) add: "Obviously if the individual component score is 
included in the total score for the test, then the correlation will be partly between 
the test component and itself, which will artificially inflate the correlation. For 
this reason it is common in internal correlation studies to correlate the test com- 
ponents with the test total minus the component in question". Three different 
types of correlation coefficients can be identified, each with its own criterion: 

• The correlation coefficients between each pair of subtests (Cl). These cor- 
relations should be fairly low, from 0,3 to 0,5 (cf. Hughes, 1989: 160; Alder- 
son et al., 2005: 184; Ito, 2005). 

• The correlation coefficients between each subtest and the whole test (C2). These
correlations should be 0,7 and more (cf. Alderson et al., 2005: 184; Ito, 2005).

• The correlation coefficients between each subtest and the whole test mi- 
nus the subtest (C3). These should be lower than those between each subtest 
and the whole test, i.e. C2 > C3 (cf. Ito, 2005). 
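The three kinds of coefficients, including the part-whole correction behind C3, can be sketched as follows. The section scores here are simulated, not the TAG data, and `correlation_evidence` is an illustrative helper rather than part of any named package.

```python
import numpy as np

def correlation_evidence(sections: dict[str, np.ndarray]) -> dict[str, float]:
    """C1: correlations between section pairs; C2: each section vs. the whole
    test; C3: each section vs. the total minus that section."""
    names = list(sections)
    total = sum(sections.values())
    out: dict[str, float] = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            out[f"C1 {a}-{b}"] = np.corrcoef(sections[a], sections[b])[0, 1]
        out[f"C2 {a}"] = np.corrcoef(sections[a], total)[0, 1]
        out[f"C3 {a}"] = np.corrcoef(sections[a], total - sections[a])[0, 1]
    return out

# Hypothetical scores of fifty candidates on two sections sharing a common factor
rng = np.random.default_rng(0)
base = rng.random(50)
secs = {"S1": base + rng.random(50), "S2": base + rng.random(50)}
ev = correlation_evidence(secs)

# Part-whole inflation: a section correlates with a total that contains it,
# so C2 exceeds the corrected C3 - which is why the criterion is C2 > C3.
assert ev["C2 S1"] > ev["C3 S1"]
```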

The correlational pattern that results is indicated in Table 3. The table indicates 
the following: 

• C1 (the correlations between pairs of sections): only eight of the fifteen
correlations meet the criterion, with seven lower than 0,3.

• C2 (last row): only three of the six correlations meet the criterion. 

• C3 (last column): all correlations meet the criterion. 

It must be noted that one weakness inherent in the correlational approach to 
construct validation is that it only evaluates the relevance of those performance 
criteria that are already included, and that it cannot identify others that are rele- 
vant to the construct but which have been omitted (Moritoshi, 2002: 11).

Claim 5: Each section of the TAG test displays construct validity.

Bachman (1990: 259) indicates that factor analysis is extensively employed in con- 
struct validation studies. The construct validity of each section of the test can be 




verified by means of principal component 
analysis, a factor analytic model that reduc- 
es data and extracts the principal compo- 
nents or factors that underlie the construct 
being assessed. The results obtained by 
means of STATISTICA (StatSoft, Inc., 2006) 
are displayed in Table 4. 

Only Sections 1 and 5 formed one con- 
struct, while sections 2 and 6 can be split up 
into two constructs. In this regard, a princi- 
pal factor analysis with an oblique (OBLIM- 
IN) rotation was performed. For section 2 
the resultant factor pattern gives items 6-9
and 12 as the first sub-factor, while items 10 and 11
belong to the second sub-factor, with a cor- 
relation of -0,57 between the sub-factors. In 
the case of section 6, the sub-factors are 
formed by items 56 to 63 and 49 to 55 respec- 
tively, with a correlation of -0,64 between 
sub-factors. Sections 3 and 4 are not con- 
struct valid. For construct validity, as few
factors as possible should explain the maximum
percentage of variance, with
communalities as high as possible. (In Sec- 
tion 4, only 37% of the variation is explained 
by the two factors.) 

Claim 6: The first principal component dominates the whole test.

In Figure 2, the eigenvalues for each principal component are plotted as a
diminishing series in a simple scree plot (cf. Cattell, 1966). The percentage of
variance explained by the first component was 13,5%, relative to 6,2%, 3,9% and
3,0% for the second, third and fourth components respectively. The first compo-
nent is therefore not as dominant as one would ideally wish.
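The quantities behind a scree plot - the eigenvalues of the inter-item correlation matrix and the percentage of variance each component explains - can be computed as follows. The response matrix is simulated, not the TAG data, and `variance_explained` is an illustrative helper.

```python
import numpy as np

def variance_explained(scores: np.ndarray) -> np.ndarray:
    """Percentage of variance carried by each principal component,
    taken from the eigenvalues of the inter-item correlation matrix."""
    corr = np.corrcoef(scores, rowvar=False)           # items as variables
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # diminishing series
    return 100 * eigvals / eigvals.sum()

# Simulated responses of 300 candidates to 10 items sharing one weak factor
rng = np.random.default_rng(2)
factor = rng.standard_normal((300, 1))
scores = 0.5 * factor + rng.standard_normal((300, 10))

pct = variance_explained(scores)
assert abs(pct.sum() - 100) < 1e-6  # percentages sum to 100
assert pct[0] == pct.max()          # first component explains the most
print(np.round(pct[:4], 1))         # a dominant first value signals one construct
```

A first component that towers over the rest, as in Cattell's classic scree criterion, would support a unidimensional construct; the fairly flat series reported for the TAG (13,5% vs. 6,2%) is what makes the first component's dominance questionable.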

Claim 7: Test takers were familiar with the demands of the test.

A questionnaire aimed at establishing how first-year students experienced the test
was completed by a group of 754 students from eight faculties during the first week 
of classes. Because of practical constraints, this group was not randomly selected, 
but formed an availability sample. The findings cannot be generalised to the whole 
population, but provide an indication of how students felt about the whole test 
procedure. Claims 7 to 10 are based on the findings of this questionnaire. 


Table 4: Factor analysis results

Section   Number of components   Percentage variance explained   Communalities
1         1                      61                              0.18 - 0.81
2         2                      71                              0.25 - 0.68
3         6                      39                              0.18 - 0.70
4         2                      37                              0.20 - 0.59
5         1                      43                              0.37 - 0.48
6         2                      50                              0.15 - 0.66

[Figure 2: Scree plot - eigenvalues plotted, in diminishing order, against the number of eigenvalues]


Table 5: Where did you first hear about the test?

Response                         Percentage
In the first-year guide              42
When I enrolled                      20
From the residence committee         16
From friends                         13
Other                                 7
No response                           2


Weir (2005: 54) points out that candidates should 
be familiar with the task type before sitting the test 
proper, as the degree of the candidate's familiarity 
with the demands of a test may affect the way the 
task is dealt with. He states that specimen past papers and clear specifications
should help to prevent difficulties in this respect, and that an exemplification of
tasks and procedures should be readily available.


Students were informed about the TAG test when they enrolled at the university,
and full information was provided in a first-year guide that was sent to them in
the November preceding their arrival towards the end of January. No information
was provided on the university website. Table 5 indicates where students first
heard about the test.

Most of these students read about the test in the first-year guide. The guide informed students that they could look at a specimen test on the website of the University of Pretoria. Only 11 percent looked at this example test, and of these, only 7 percent did the practice test. Seventy-five percent of those who did not look at the specimen test indicated that they would have liked to do so. It therefore seems as if the test is not as transparent as it could be at the Potchefstroom campus, and that much more could be done to inform students about the test and its format.

Claim 8: The circumstances under which the test is administered are conducive to valid scores being obtained.

The test was written on the fourth day after students' arrival on campus. From the second day, they were subjected to an Orientation and Information week under the supervision of the residence committees. This involved a full programme, and students were kept busy the whole day (and part of the night!).

We asked students whether they felt they could deliver their best performance in the test: 65 percent indicated that they could not. The reasons they gave were as follows: tired (40%), sleepy (21%), stressed (25%), ill (7%) and other reasons (7%). Sixty-three percent of the students went to bed after midnight, while 57 percent reported that they had to get up before 6:00 on the morning of the test. It seems as if the circumstances in which the test is administered are not ideal, and these could affect the validity of the test results.

Claim 9: Students experience the test as relevant to their studies. 

Forty-five percent of the students had a good idea of the purpose of the test; 34 percent had a vague idea, while 21 percent did not know what the purpose was. The test results were reported as codes ranging from 1 to 5, indicating the extent to which students were at risk in their studies. Students whose results fell within codes 1 to 3 had to enrol for the course in Academic Literacy. Twenty-nine percent of the respondents had to take the course, and 69 percent of those declared that they would do so under protest.

Claim 10: Students found the test experience agreeable. 

Eleven percent of the students said they found all the questions clear, 68% knew what they had to do for most questions, and 19% for only some of the questions, while 1% did not understand any question (1% gave no reply). We also asked the students what they thought about the length of the test. Fourteen percent could finish the test, 39% could finish but had to work fast, while 47% reported that they could not finish the test. This is somewhat disconcerting, as most responses should ideally fall in the middle category. Fifty-eight percent felt that the test was too long to finish within the allocated time.


5. Conclusion

A problem in the current conception of validation studies is the balance between theoretical rigour and manageability - this remains a challenge for validation research. As a result, a pragmatic stance is often adopted, as was the case here. The framework employed in this validation study includes statistical procedures, based on both classical and item-response theory, as well as a social dimension, in the form of student feedback. A variety of evidence was collected, providing a profile of the test results and of the test's administration. It is obvious that validity is a multifaceted concept, and that many factors together play a role in the validation of a test. Each claim that is formulated contributes to an aspect of the validity of the test. The aim of the article was to illustrate a method of doing a validation study, and it is clear that the conclusions drawn depend on a judgement and interpretation of the results obtained.

It must be stressed that there is no such thing as a perfect test, nor a perfect test situation. This is why provision is usually made for the misclassification of candidates when test results are analysed. But the TAG test investigated here performs very well. It is used for a specific, clearly defined purpose. Its reliability is good, and there is a good fit between student ability and item difficulty. The internal correlations are probably as good as can be expected. More than one underlying trait (or factor) was extracted, and the first was not as dominant as expected. This may be due to the fact that academic literacy is a rich and multidimensional construct (cf. Van der Slik & Weideman, 2005). It is also clear that much more can be done to improve the administration of the test, such as informing students better, explaining the relevance of the test, and ensuring that students can deliver their best performance in it. The problems students had with the length of the test also warrant further investigation.




We investigated the validity of only one administration of the TAG test here. If it proves to be valid in most respects over a number of administrations, and continues to be used for its specific purpose, the test itself can come to be regarded as a valid instrument, in terms of the traditional interpretation of validity. Validation thus remains an ongoing process.


Bibliography 

AERA/APA/NCME. See American Educational Research Association.

Alderson, J.C., Clapham, C. & Wall, D. 2005. Language test construction and evaluation. Cambridge: Cambridge University Press.

American Educational Research Association, American Psychological Association & National Council on Measurement in Education. 1999. Standards for educational and psychological testing. Washington, DC: Author.

Bachman, L.F. 1990. Fundamental considerations in language testing. Oxford: Oxford University Press.

Bachman, L.F. 2005. Statistical analyses for language assessment. Cambridge: Cambridge University Press.

Bachman, L.F. & Palmer, A. 1996. Language testing in practice. Oxford: Oxford University Press.

Borsboom, D., Mellenbergh, G.J. & Van Heerden, J. 2004. The concept of validity. Psychological review 111(4): 1061-1071.

Brown, J.D. 1996. Testing in language programs. Upper Saddle River, NJ: Prentice Hall Regents.

Cattell, R.B. 1946. Description and measurement of personality. New York: World Book Company.

Cattell, R.B. 1966. The scree test for the number of factors. Multivariate behavioural research 1: 245-276.

Chapelle, C.A. 1999. Validity in language assessment. Annual review of applied linguistics 19: 254-272.

Cronbach, L.J. & Meehl, P.E. 1955. Construct validity in psychological tests. Psychological bulletin 52: 281-302.

Davies, A. & Elder, C. 2005. Validity and validation in language testing. In: Hinkel, E. (ed.) 2005: 795-813.

Facets for Windows Version No. 3.61.0. Copyright © 1987-2006, John M. Linacre.

Fulcher, G. 1997. An English language placement test: Issues in reliability and validity. Language testing 14(2): 113-138.

Fulcher, G. & Davidson, F. 2007. Language testing and assessment: An advanced resource book. Abingdon, Oxon: Routledge.

Hambleton, R.K., Swaminathan, H. & Rogers, H.J. 1991. Fundamentals of item response theory, volume 2. Newbury Park: Sage Publications.

Hinkel, E. (ed.). 2005. Handbook of research in second language teaching and learning. Mahwah, New Jersey: Lawrence Erlbaum.

Hughes, A. 1989. Testing for language teachers. Cambridge: Cambridge University Press.

Ito, A. 2005. A validation study on the English language test in a Japanese nationwide university entrance examination. Asian EFL journal 7(2), Article 6. [Online]. Available at http://www.asian-efl-journal.com/june_05_ai.pdf. Accessed on 20 July 2007.

Kane, M.T., Crooks, T. & Cohen, A. 1999. Validating measures of performance. Educational measurement: issues and practice 18(2): 5-17.

Kane, M.T. 1992. An argument-based approach to validity. Psychological bulletin 112: 527-535.

Kelley, T.L. 1927. Interpretation of educational measurements. New York: Macmillan.

Lado, R. 1961. Language testing: the construction and use of foreign language tests. New York: McGraw-Hill.

Lane, S. 1999. Validity evidence for assessments. [Online]. Available at http://nciea.org/publications/ValidityEvidence_Lane99.pdf. Accessed on 21 May 2007.

Lepota, B. & Geldenhuys, J. (eds.). 2005. 25 years of applied linguistics in Southern Africa: Themes and trends in Southern African linguistics. Pretoria: University of Pretoria.

Linn, R.L. (ed.) 1989. Educational measurement. New York: Macmillan.

Madsen, H.S. 1983. Techniques in testing. Oxford: Oxford University Press.

McNamara, T.F. 1996. Measuring second language performance. London: Longman.

McNamara, T.F. 2003. Looking back, looking forward: Rethinking Bachman. Language testing 20(4): 466-473.

McNamara, T.F. & Roever, C. 2006. Language testing: The social dimension. Oxford: Blackwell.

Messick, S. 1989. Validity. In: Linn, R.L. (ed.). 1989: 13-103.

Moritoshi, T.E. 2002. Validation of the test of English conversation proficiency. MA dissertation. University of Birmingham. [Online]. Available at http://www.cels.bham.ac.uk/resources/essays/MoritoshiDiss.pdf. Accessed on 12 May 2007.

Newton-Smith, W.H. (ed.). 1999. A companion to the philosophy of science. Oxford: Blackwell.

StatSoft, Inc. 2006. STATISTICA (data analysis software system), version 7.1. www.statsoft.com

Trout, J.D. 1999. Measurement. In: Newton-Smith, W.H. (ed.). 1999: 265-276.

Tyler, L. 1963. Tests and measurement. Englewood Cliffs, NJ: Prentice-Hall.

Van Dyk, T. & Weideman, A.J. 2004. Switching constructs: On the selection of an appropriate blueprint for academic literacy assessment. Journal for language teaching 38(1): 1-13.

Van der Slik, F. & Weideman, A.J. 2005. The refinement of a test of academic literacy. Per linguam 21(1): 23-35.

Weideman, A.J. 2005. Integrity and accountability in applied linguistics. In: Lepota, B. & Geldenhuys, J. (eds.). 2005: 174-197.

Weideman, A.J. 2006a. Transparency and accountability in applied linguistics. Southern African linguistics and applied language studies 24(1): 71-86.

Weideman, A.J. 2006b. Assessing academic literacy in a task-based approach. Language matters 37(1): 81-101.

Weideman, A. 2007. Academic literacy: prepare to learn. Pretoria: Van Schaik.

Weir, C.J. 2005. Language testing and validation. Houndmills, Basingstoke: Palgrave Macmillan.


Van bevreemdende diskoers tot toegangsportaal: e-Leer as aanvulling tot 'n akademiese geletterdheidskursus


T.J. van Dyk, L. van Dyk, H.C. Blanckenberg en J. Blanckenberg 
Universiteit Stellenbosch 


From alienating discourse to access discourse: e-Learning as supplement to an academic literacy course

Low levels of skill in the language of teaching and learning are widely considered one of the main reasons for the lack of academic success among South African (and international) undergraduate students. In fact, students are also increasingly experiencing academic discourse in their mother tongue as a "foreign language". This is an unsettling trend which is confirmed by local and international literature, and can possibly be attributed to a language curriculum in secondary education that does not adequately prepare students for the higher-order language-thinking skills that they need for study at university. During the past few years various local and international tertiary education institutions have started implementing academic literacy programmes for all first-year students to address this under-preparedness. Two problems, however, have a significant influence on the effective teaching of these language-thinking skills. Firstly, faculties can hardly sacrifice two contact sessions per week to these types of support courses because of their own overfull programmes. Secondly, there are only a limited number of language practitioners available for an increasing number of students. As a possible solution, the SU Language Centre has launched an e-learning project within which a supplementary course-related e-learning module for an existing first-year reading course has been developed. From the quantitative and qualitative research results insightful conclusions can be drawn. Examples are: (i) the extent to which e-learning can supplement lecture time, while simultaneously empowering students academically; and (ii) the effectiveness of including in the curriculum e-learning task types that facilitate the acquisition of certain higher-order language-thinking skills.


1. Introduction

Universities nowadays experience pressure, on the one hand, to increase student-lecturer ratios because financial resources are limited; on the other hand, there is also pressure to devote more support and teaching time to the development of students' knowledge and skills (among other things with regard to academic literacy). A balance must therefore be found between the financial viability of universities and their responsibility to make the necessary resources available for this purpose. In this regard a pilot project was launched at Stellenbosch University (SU) to determine whether contact teaching, within the context of an academic literacy programme, can be supplemented with e-learning and, if so, to what extent and in what form.

This article first sketches the broad background against which the problem should be viewed. The factors that gave rise to the need to investigate alternatives to contact teaching are then discussed. The potential of e-learning as such an alternative is highlighted, after which the design of the pilot project launched to investigate its feasibility is described. The remainder of the article is devoted to the findings and conclusions arising from qualitative and quantitative feedback from the students who participated in the project.


2. Background

On the basis of data from the South African National Department of Education, Mamaila (2001:1) points out that 25% of first-year students enrolled for university study throw in the towel even before completing their first year of study. The situation remains worrying in the subsequent years of the undergraduate programme: between 30% and 40% of first-year students, for example, do not complete their undergraduate studies in the recommended period. Throughput rates at higher education institutions are thus low, and represent a financial loss of almost R1,8 billion per year.

There are various reasons for this situation, but low levels of skill in the language of teaching and learning are regarded as one of the main reasons for a lack of academic success among South African undergraduate students - even among those with great academic potential (Blacquiére, 1989; Leibowitz, 2001; Macdonald, 1990; Perkins, 1991; Pretorius, 1995; Van Rensburg & Weideman, 2002; Vorster & Reagan, 1990). Research by independent education specialists further confirms a drastic decline in the (academic) language proficiency levels of students enrolling for tertiary study (Natal Witness, 2004:8; Rademeyer, 2005a:40). This is further confirmed by the results of the academic literacy tests that are nowadays used at various universities as placement and even access mechanisms. An obvious consequence of these low academic literacy levels is that such students struggle to engage with prescribed academic material.

The admission of students who are underprepared with respect to academic language proficiency (Zamel & Spack, 1998) appears not to be merely a local phenomenon (Butler & Van Dyk, 2004:1). Pierce (2003), for example, reports on research indicating that only 6% of first-year students at universities in the United States of America (USA) are ready to learn independently from their textbooks. The state of affairs is even worse for writing: those who can write well make up only 2% of the total, while fewer than a third (31%) have adequate writing ability. A study that McKenzie and Schweitzer (2001) conducted among Australian students confirms these findings and also emphasises that academic language proficiency influences academic success.

Various reasons are advanced locally for academic failure (of which, for the purposes of this article, only two will be touched on briefly). The first is the political history of apartheid and the resulting unequal distribution of resources in the South African education system, which affected a large group of students negatively (Butler & Van Dyk, 2004:1). Poor preparedness is, however, no longer a phenomenon found only among historically disadvantaged students. An increasing number of traditionally white, so-called advantaged students also face academic failure. Indeed, even mother-tongue students increasingly experience the academic discourse in their mother tongue as alienating.

A further reason for low academic literacy levels may be that the South African education system was in the past strongly syllabus-driven and positivistic. Language was seen as a collection of discrete elements that had to be learnt rather than acquired. Within this system

• knowledge was regarded as universal, unchanging, context-free and neutral;

• the teacher was the only source of information, and his/her views as chief expert were never questioned;

• personal experience and culture, or the known reality, were the only valid frame of reference;

• the emphasis on left-brain activities largely gave rise to a linear way of thinking; and

• comprehension was undifferentiated, with the result that all information was experienced as equally important.

(Bencze, 2005; Blanckenberg, 1999:34)

At present teachers are caught between the positivistic patterns of the past and the current constructivist approach to teaching and learning (Blanckenberg, 1999:41). Teachers have, in general, not yet made the shift to the constructivist approach that underlies new developments such as outcomes-based education (OBE). There are several complicated reasons for this (which will not be discussed for the purposes of this article), but a main reason is possibly a lack of training and resources to make such a shift possible. At the moment, then, teachers vacillate between the old and the new approach to learning and teaching.

Rademeyer (2005a:41) raises the ironic possibility that students may experience learning problems in the tertiary learning environment as a result of OBE, since its aim is to prepare learners for real life (authentic situations) and not necessarily for university study. Learners thus no longer necessarily develop the ability to act as independent, strategic, reflective thinkers, and the development of this ability is a basic necessity for any university student. Consequently university students (especially first-years) struggle, among other things, to read and understand strategically the assignments, test questions and academic texts that contain fairly sophisticated argumentation and abstract vocabulary. It is thus clear that the language curriculum in secondary education does not adequately prepare learners for the higher-order language-thinking skills that they need for this.

The above situation leaves universities with no choice but to put mechanisms in place to support underprepared students by paying attention to factors, such as low academic literacy levels, that influence academic success after enrolment (Botha & Cilliers, 1999:144). One such mechanism that Stellenbosch University (SU) decided to implement is the institution of academic literacy courses at different levels for all first-year students, according to the students' risk level (as determined by the academic literacy tests). The SU Language Centre was consequently tasked with taking responsibility for these courses. Not all faculties have implemented this yet, because the decision was taken relatively recently.


3. Problem statement and proposed solution

The phasing in of credit-bearing academic literacy courses that, among other things, empower students to read and write at an academically acceptable level does, however, bring a particular set of problems with it.

The first-year numbers of certain faculties (Science, and Economic and Management Sciences) are, for example, so large that more than one lecturer would have to teach in a single period slot, or lectures would have to be repeated, which is not feasible logistically or financially. This phenomenon will increase as all faculties have to be served. Furthermore, faculties generally grant barely two contact sessions per week to support courses of this kind, because of their own overfull programmes. Concepts can consequently be discussed and applied, but time and opportunity for the essential consolidation and practice are lacking.

The Language Centre therefore had to come up with creative solutions. One possibility was the development of a supplementary course-related e-learning programme that further practises and assesses the concepts and skills introduced and demonstrated during contact lectures. In this way at least the teaching load (although not the total workload) of lecturers could be lightened.

The following three advantages of e-learning lie at the heart of this possible solution:

• Although the development of e-learning material is, firstly, enormously labour-intensive, one of its greatest advantages is its potential to be extended to many students.

• Because academic literacy courses are, secondly, offered to such a large portion of the student population, it would make sense to determine through a pilot study whether it is practically feasible to use e-learning components in other modules as well (the advantage of economies of scale).

• Thirdly, Prensky (2001) was the first to refer to today's students as so-called "digital natives". Even the student who grows up in a house without electricity, for example, often has access to a cell phone - even if it is charged from a car battery. It could therefore be expected that students would experience few adjustment problems with a supplementary e-learning programme. More than that: because students generally feel at home with information technology, it can be harnessed as a powerful aid to familiarise them with the "foreign language" (academic literacy).

E-learning also makes flexible learning possible in terms of time, place and even content (theme, level, etc.) (Collis & Nikolova, 1998). The educational basis of the flexible-learning principle is the concept of individualised learning, because the computer, as an authentic learning instrument, can give immediate, appropriate feedback in response to a specific individual student's answer.


4. Research approach

This article focuses on the pilot project that the Language Centre launched in 2006 with the aim of determining whether the proposed solution to the problem is feasible. A number of research questions were posed in this regard - question 1 requires quantitative data, while questions 2 to 8 are answered by qualitative data.

4.1 Research questions

Question 1: Will students who have to master course content through e-learning instead of normal contact teaching be worse off, better off or the same as students in the contact-teaching situation?

Question 2: Which content-related and technical aspects of the program must be adapted for more effective learning?

Question 3: What is the ideal combination of contact and computer teaching for a full literacy course?

Question 4: Can e-learning offer solutions to the problem of an increasing number of language services but limited staff and contact time?

Question 5: What can be achieved with e-learning that is not possible with contact teaching?

Question 6: Does e-learning indeed speak to the so-called digital natives?

Question 7: Is extension of the e-learning content of academic literacy courses from one faculty to another possible, in order to benefit from economies of scale?

Question 8: Can e-learning promote flexible learning?

4.2 Research methodology

The research methodology followed was an experiment within the framework of an e-learning pilot project that served as the basis for the research foci mentioned above. The pilot project took shape in an academic literacy course (with the focus on reading development) for first-year students. The critical outcome of the course is the development of strategic reading ability through the application of certain higher-order language-thinking skills. To this end the students read numerous texts around which they have to identify and solve problems, as well as gather, organise and evaluate information. For example, they had to

• use the unconscious reading reactions (anticipation, hypothesis, association, making connections and inference) consciously as thinking skills, being aware of the influence of their reader schemata and value system on text interpretation;

• develop an overall reading strategy for the meaningful interpretation of texts at paragraph and whole-text level by choosing, from the range of reading techniques, the appropriate one or combination of them, as determined by the reading purpose;

• understand the local relationship between topic sentences and supporting sentences within a paragraph, as well as the overall relationship between larger text parts, with the aid of discourse reading techniques;

• distinguish between thematically relevant and irrelevant information;

• recognise the writer's organisational writing patterns (for example chronology, comparison and contrast, cause and effect), as well as the communicative functions of sentences; and

• annotate a text effectively in order ultimately to summarise it meaningfully.

4.3 Target course and group

The course Verdere Kommunikasievaardighede in Afrikaans 143 (Further Communication Skills in Afrikaans, advanced level), running over thirteen weeks with five lectures per week, was taken by 21 first-year students in Engineering. It was offered by the Unit for Afrikaans of the Language Centre in the second semester of 2006. This group was identified for the more advanced language proficiency level on the grounds of their results in two academic literacy tests, a language profile assessment and a piece of writing in the first semester. Ten students were Afrikaans-speaking, nine English-speaking and two were speakers of an African language.

4.4 Course of the experiment

The students received contact teaching until the end of week nine, when 20 students wrote a pre-test covering the most important course outcomes (one of the 21 students did not turn up and was therefore excluded from the experiment for quantitative research purposes). Stratification according to marks took place, after which the students within the mark strata were randomly assigned to an experimental and a control group of 10 students each. In weeks ten and eleven the former group followed the e-learning segment for five periods per week in an equipped computer laboratory under the supervision of the curriculum designer and the programmer. The control group continued with the normal contact sessions. The content was exactly the same for both groups: revision of the course outcomes taught in weeks one to nine. Both groups kept record of the marks they obtained for each task.

Directly after weeks ten and eleven the students wrote a post-test in which variables such as skill types, mark weighting, text length, period slots and the identity of the test administrators were kept constant. The text themes, however, deliberately differed in order to eliminate possible memory transfer (after only two weeks). In addition, the difficulty level of the texts' vocabulary and sentence structure was deliberately increased. In week 13 the groups were reunited for contact teaching.
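The assignment procedure above (random allocation within mark strata) can be sketched as follows. This is a minimal, hypothetical illustration of stratified randomisation into two equal groups, not the researchers' actual script; the names and marks are invented:

```python
import random

def stratified_split(marks, seed=1):
    """Randomly assign students to an experimental and a control group
    within strata of adjacent pre-test marks (hypothetical sketch)."""
    rng = random.Random(seed)
    ranked = sorted(marks, key=marks.get, reverse=True)
    experimental, control = [], []
    # Walk down the ranking two students at a time (one stratum per pair)
    # and randomly send one member of each pair to each group.
    for i in range(0, len(ranked), 2):
        pair = ranked[i:i + 2]
        rng.shuffle(pair)
        experimental.append(pair[0])
        if len(pair) > 1:
            control.append(pair[1])
    return experimental, control

marks = {f"s{i:02d}": 50 - i for i in range(20)}   # 20 students, marks 50..31
experimental, control = stratified_split(marks)
print(len(experimental), len(control))
```

Pairing adjacent ranks keeps the two groups balanced on pre-test ability while preserving random assignment within each stratum.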

4.5 Data collection

Data for the quantitative research focus were obtained by comparing the pre- and post-test marks of the experimental and control groups. Qualitative data for all the other research questions were obtained by the following means:

• a learning journal: during each of the 5 sessions in weeks 10 and 11 the e-learning group commented on each task after completing it, under the headings Plus, Minus and Interesting, and also recorded the mark for each task;

• oral input and advice from the e-learning group during weeks 10 and 11, especially on programming issues;

• observation notes made by the curriculum designer and the programmer during their supervision of the e-learning group; and

• an e-learning questionnaire, in which the e-learning group had to express their experiences in two ways: firstly, they could express their experiences of the e-learning versus the contact-teaching situation as percentages and motivate in writing why they awarded the specific score, and secondly they commented on semi-structured questions.




Table 1: Arithmetic mean of test results out of a raw mark total of 50

Group                 Pre-test   Post-test
Experimental group    36.1       32.7
Control group         36.6       32.3

5. Research results

5.1 Quantitative results

The results of the pre- and post-tests are set out in Table 1. The students in the experimental group (e-learning) as well as the control group (contact teaching) generally fared worse in the post-test.

These raw marks were subjected to statistical analysis, with the following results: according to the Chi-square tests for normality, all four sets of data are normally distributed, and they also meet the requirement of homoscedasticity. The data could therefore be subjected to parametric tests. Consequently a repeated-measures analysis of variance (ANOVA) was done. These results are summarised in Figure 1. Although the marks of the control group (contact teaching) dropped more, this effect is, according to the ANOVA, not statistically significant (p=0.77 for the interaction effect). The one group thus did not really perform better than the other, and both groups therefore belong to the same universe.

[Figure 1: Mean test marks for the pre- and post-tests, shown separately for the two groups of students; the vertical bars denote 95% confidence intervals.]

The inference from the above analysis is that the e-learning group was not disadvantaged by e-learning work replacing contact teaching, for they performed the same as the contact group. In the light of the following context-specific characteristics of the experiment, however, this interpretation is not generalisable (transferable to other contexts), since

• only one lecturer was involved in the teaching of one class group - the quality of the lecturer also influences the results;

• the period between pre- and post-test (10 hours in total) was too short to identify a noticeable difference in impact between computer and contact teaching methods; and

• the possibility of the so-called Hawthorne effect must also be kept in mind. This refers to the situation in which an experimental group makes noticeably more effort to perform simply because they are psychologically motivated by the focused attention they receive. In the present experiment one could also count the students' own oral input and written feedback in the learning journal and the e-learning questionnaire as psychological motivators.

More important, however, for the planning of the supplementary e-learning programme is the average drop of 7.6 percentage points for both groups from pre-test to post-test. This was an expected trend in the light of the deliberately increased difficulty level of the test. The purpose of the latter was to determine whether the full course design (weeks 1-9 plus revision weeks 10 and 11) had prepared the students sufficiently for the effective use of support strategies (such as contextual, morphological and syntactic clues) to infer correctly the meaning of abstract academic vocabulary. The test statistics confirm the suspicion that this preparation was not thorough enough. During the curriculum design of the supplementary e-learning programme this aspect of academic literacy will receive more attention (see also Sections 5 and 6).

5.2 Qualitative results

For the researchers involved, these results form the most important aspect of the pilot project because they shed light on particular perspectives around research questions 2 to 8. In particular, they also determined decisions about the content and scope of the supplementary e-learning programme being curriculated and programmed in 2007/8. Only the most striking recurring response patterns (from 40% or more of the e-learning group) in the two written sources, the questionnaire and the learning journal, are regarded as relevant and are therefore reported. Note in particular the cross-validations of responses within the e-learning questionnaire itself, and also between questionnaire and learning journal.

5.2.1 The e-learning questionnaire

In questions 1, 3, 5 and 6 of the questionnaire students had to express their experiences as percentages and motivate them in writing, while in Question 2 they had to complete six incomplete statements in writing. Question 4's results were not used, because the students interpreted the question in divergent ways and the data are therefore unreliable. Note that, for strategic reasons, the discussion below does not follow the numerical order of the questions; the sense of this will become clear.




Question 1: Give a general impression of the e-learning experience
The group's general impression mark for the learning experience was expressed by means of a table with defined categories (Excellent = 80-100%; Good = 70-79%; Satisfactory = 60-69%; Average = 50-59%; Below average = below 50%) and shows a range of 70% to 90%, with an overall mean of 77.2%.
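The rating bands above amount to a simple threshold lookup; a minimal sketch (function name invented for illustration):

```python
def impression_category(score: float) -> str:
    """Map a percentage impression mark to the questionnaire's rating bands."""
    bands = [(80, "Excellent"), (70, "Good"), (60, "Satisfactory"), (50, "Average")]
    for floor, label in bands:
        if score >= floor:
            return label
    return "Below average"

print(impression_category(77.2))  # → Good (the group mean falls in the 70-79% band)
```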

Question 5: In this section you are asked to compose an ideal teaching-learning situation for this module. Imagine you have the right to make the choices yourself. What would your ideal look like? Choose the learning experiences you would include for this module. Next to each one, write the percentage of time you would allocate to the experience.

The learning experiences offered in this list of options are in line with the instructional components for computer-based flexible learning as defined by Collis and Nikolova (1998). The students' positive impression mark in Question 1 is validated in this question by their composition of an ideal teaching-learning situation for the full course, chosen from a table of six possible teaching-learning situations. To each experience of their choice they had to allocate a percentage of the teaching-learning time so that the total added up to 100%. Figure 2 presents the three experiences under each of the two overarching categories, class time and e-learning, statistically.

The following two preference trends are particularly striking:

(i) The learners prefer to spend 57.7% of the time on e-learning and 42.3% on class time.

(ii) Within these two overarching categories, their ideal teaching-learning situation combines the two highest preference positions: an average of 27.5% for e-learning as individual work with the lecturer present, and 21.2% for class time as a lecture.

[Figure 2: The ideal teaching-learning situation: preferred time allocation. Legend: Class time: lecture by lecturer (21.2%); Class time: group work during lecture (7.9%); Class time: individual work during lecture (13.2%); e-Learning: pair work, lecturer present (12.1%); e-Learning: individual work, lecturer present (27.5%); e-Learning: flexi-time, no lecturer (18.1%).]

From the above it can be inferred that this specific group of students wants to spend 81.9% of the total teaching time in the presence of the lecturer. With the pilot project's core problem statement in mind (a growing number of language services but limited contact time and staff), it is necessary to investigate alternative methods of contact between lecturer and student. Staff shortages are in general a reality on campuses nationwide and internationally, so that only 50% of the total teaching time can take place in the presence of the lecturer or tutor. Note that the authors do not thereby hold the opinion that larger student-lecturer ratios are the only solution to the problem. From the perspective of academic managers, the solution is in fact sought in a combination of contact and computer-based teaching, represented in the bottom learning experience of the table, namely e-learning flexi-time. With no lecturer present, the student can work with the supplementary exercises at any time and in any place. There are two advantages to this: in the first place it brings relief for the lecturer; in the second place it offers the student the opportunity to learn in his or her own time, place and tempo (Collis & Nikolova, 1998). It is therefore encouraging that the students did give this learning experience the second preference position within the e-learning category. The students' clear preference for individual over group work (valid for both contact and e-learning situations) should also be taken into account in recurriculation activities. Such an investigation, however, falls outside the scope of this article.
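The headline figures in this section (57.7% e-learning versus 42.3% class time, and 81.9% lecturer-present time) are simple sums over the six Figure 2 categories; a minimal sketch of that bookkeeping:

```python
# Mean time allocations from Figure 2: (category, lecturer present, percent).
allocations = [
    ("class", True, 21.2),        # lecture by lecturer
    ("class", True, 7.9),         # group work during lecture
    ("class", True, 13.2),        # individual work during lecture
    ("e-learning", True, 12.1),   # pair work, lecturer present
    ("e-learning", True, 27.5),   # individual work, lecturer present
    ("e-learning", False, 18.1),  # flexi-time, no lecturer
]

e_learning = sum(pct for cat, _, pct in allocations if cat == "e-learning")
class_time = sum(pct for cat, _, pct in allocations if cat == "class")
with_lecturer = sum(pct for _, present, pct in allocations if present)

# → 57.7 42.3 81.9
print(round(e_learning, 1), round(class_time, 1), round(with_lecturer, 1))
```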

Question 6: Give reasons why you made your particular choice of learning experiences in Question 5.

The written reasons for their preferred time allocation validate the second preference position mentioned above. Compare the following striking trends, which are briefly described.

Reasons for e-learning: individual work with the lecturer present

(i) With individual work on the computer

- they do not feel pressured to answer a lecturer's question immediately;

- it provides the practice needed to master skills;

- the computer feedback is "just as good as the lecturer's";

- they can work at their own individual pace.

(ii) The lecturer as facilitator is needed to explain certain concepts. Note a possible ambivalence here: although 81.9% of the students indicated that they want to spend the teaching time in the presence of the lecturer, 50% of this same group indicated that this preference is in fact arbitrary, because they could also put certain questions to him or her later. This presumably relates to the students' positive impression of the quality of the computer feedback as "just as good as the lecturer's", and hence to flexi-time in second preference position.

Reasons for class time: lecture by lecturer

A first explanation by the lecturer is essential for understanding concepts or skills before the individual practice on the computer.




Question 3: During most e-learning interactions, text feedback was given after a correct or incorrect response. To what extent did you concentrate on the feedback?

• After a correct response:

• After an incorrect response to a question with only one chance to answer:

• After a second incorrect response to a question with two chances to answer:

By means of a table with percentage categories, students had to indicate the extent to which they attentively read, interpreted and applied the computer feedback in subsequent answering strategies. After a correct response, students paid attention at an average level of 54%. After an incorrect response to a question with only one chance to answer, the focused attention rose dramatically to 84.4%. It rose even further, to 87.6%, when they had given a second incorrect response to a question with two chances to answer. Note that, in a question with two chances to answer, the feedback always contains a built-in clue that leads each student to reflect on his or her mistake and to reread the relevant part of the text so as to use the second chance more effectively.

The educational advantage that this kind of individualised computer feedback has over conventional contact teaching is thus clearly evident from the heightened attention pattern shown in the statistics. The principle is discussed more comprehensively in Section 6.

Question 2: Complete the following sentences about the e-learning experience

• I liked ...

• I did not like ...

• I found it easy that ...

• I found it difficult that ...

• What I would change is ...

• The level of language use was ...

Only three categories within this question yielded recurring response patterns and are therefore regarded as relevant to the qualitative research focus. The phenomenon of cross-validation with data in other questions is striking in all three and is therefore highlighted:

(i) I liked ...

- the immediate, individualised computer feedback; through it they "learned more easily and learned more" and "understood mistakes immediately" (70%);

- the silence in which "(I) can concentrate better" and therefore "learn better" (40%).




From the above it is clear that the first preference cross-validates the statistics on the increase in students' attentive reading of feedback (Question 3). Both preferences also cross-validate their 57.7% time preference for e-learning work over the 42.3% preference for class time (Question 5).

(ii) What I found difficult was ...

- academic vocabulary (the 5 second-language speakers)

- The rest of the students filled in nothing.

(iii) The level of language use was ...

- sometimes difficult but challenging, thanks to dictionaries (the same 5 second-language speakers)

- fair (4 mother-tongue speakers)

- One student filled in nothing.

The cross-validation between these two categories is striking and was confirmed by further validation based on the second-language speakers' running commentary in their learning journals, namely that they experienced problems with the vocabulary, while the mother-tongue speakers mentioned nothing of the kind. This vocabulary problem, especially with abstract, more academic words, is a determining factor in the general decline of 7.6 percentage points from pre-test to post-test. Note, however, that all students' marks form part of this decline, in other words the mother-tongue speakers' marks too. The inference is obvious: word comprehension is not merely the result of decoding words as a specific language's sound and written signs. Rather, it goes hand in hand with higher-order reading strategies that include the effective use of discourse, contextual, morphological and syntactic markers. This is an overarching reading skill with which mother-tongue students have just as many problems. The low mark average of the experimental as well as the control group in certain higher-order reading tasks in the five practice sessions confirms this finding further (see 5.2.2.2).

5.2.2 The learning journal

At every task, students took the trouble to give voluntary feedback under the three headings, often supplemented by spontaneous oral input.

5.2.2.1 Programming adjustments

This aspect of the research cannot be discussed meaningfully in the article without the reader having access to the specific task type and programming format in question. Only two proposals by the students are nevertheless highlighted, because they are good examples of how the computer as learning instrument can, through its programming, also empower the weak reader. The following two needs arise precisely from the tension students experienced owing to the large amount of reading of instructions, texts and feedback, which can be very unnerving for students with reading problems:

(i) Even once the student is busy carrying out the task, the original formulation of the instruction must remain visible (even if in reduced format) or must be able to appear and disappear at the click of a button.

(ii) With multiple-choice statements of a complex nature, the original statement and options must remain visible when the student receives feedback after a response. The reason: the feedback often refers to the content of the statement and/or gives a subtle clue if the student has a second chance to answer. In the clue, options are sometimes played off against one another.

5.2.2.2 Content adjustments for certain skill types

The experimental and control groups' marks for each task in each of the five sessions were combined, and the mean was calculated. On that basis, the skill types were placed in rank order from low to high. These data form the point of departure for the design and programming, in 2007/8, of the supplementary e-learning programme. Especially the skill types in the low sixties and the fifties (see Table 2) will be supplemented with a variety of micro-exercises. An asterisk means that the e-learning group had a second chance to answer in that exercise and then fared considerably better than the class group. The mean mark for that skill exercise would therefore have been even lower had they, like the class group, had only one chance to answer.


Table 2: Problematic skill types

Critical reading to distinguish valid from invalid inferences: 50.5%
Discourse reading to identify text patterns and organisation: 52.5%
Analytical reading to establish the thematic function of words: 52%
Discourse reading to choose the best theme sentence from option sentences with complex syntax and hidden repetition: 53%
Discourse reading to identify relationships between thematic keywords: 53.4%
Analytical and discourse reading to identify abstract (academic) words correctly in cloze exercises: 53.4%
Scanning to identify certain information within a time limit (speed reading): 56%
* Analytical and discourse reading to link sentence pairs with the correct connectives: 58.5%
* Critical reading to recognise the communicative function of statements about the text: 60.4%
* Discourse reading to identify larger text units within a mind map: 60.5%
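The rank ordering behind Table 2 (pool both groups' task marks per skill type, average, sort from weakest to strongest) is easy to reproduce; a sketch with invented per-task marks, of which only the means match Table 2:

```python
from statistics import mean

# Pooled task marks per skill type (invented; only the means are from Table 2).
marks = {
    "critical reading: valid vs invalid inferences": [48, 53, 50.5],
    "scanning within a time limit (speed reading)": [54, 58, 56],
    "discourse reading: text units in a mind map": [59, 62, 60.5],
}

# Average each skill type and rank from lowest (most problematic) upwards.
ranked = sorted((mean(scores), skill) for skill, scores in marks.items())
for avg, skill in ranked:
    print(f"{avg:5.1f}%  {skill}")
```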




The reader is reminded of the students' problems with especially abstract, so-called "high words", which the second-language speakers pointed out in Question 2 of the questionnaire, but which were identified in all students in both groups on the basis of the mean decline of 7.6 percentage points from pre-test to post-test. Underlying the ineffective handling of the above skill types is, then, the striking lack of the abstract vocabulary characteristic of the academic context.


6. Looking back at the research questions

The research results of this study are clear signposts towards a creative solution for Stellenbosch University's problem of growing faculty language services but limited contact time and staff. At the same time, the results also point the way for the manner in which the Language Centre can use the advantages of e-learning to develop a supplementary course-related e-learning programme. The quantitative pre-test and post-test results showed statistically that the pilot project's e-learning component of five sessions did not disadvantage the e-learning users in any way. Since it replaced contact sessions entirely, it goes without saying that an e-learning programme which provides supplementary practice of concepts or skills only after they have been introduced and applied in the ordinary lecture situation will disadvantage the students even less.

It is, however, especially in the qualitative data that it becomes clear that the e-learning medium can bring certain creative solutions to promote learning. The various research questions, or aspects of them (indicated in italics), are discussed below as an integral part of this psychodynamic.

Firstly, e-learning does indeed speak to the new digital native. This is clear from the students' preference for e-learning work (57.7%) over class time (42.3%) as the ideal teaching-learning combination for a full literacy course. Their general preference for the immediate, individualised computer feedback, which lets them learn "more easily and more", lets them "understand" their "mistakes immediately" and is moreover "just as good as the lecturer's", clearly highlights the strong educational teaching role of the computer as an authentic learning instrument. Presumably the group's positive impression mark of 77.2% for the five sessions is due precisely to the facilitating role the computer played in these sessions.

No supplementary e-learning programme will succeed without this facilitating role, for it implies a very specific meaning of the concept evaluation: in Latin, the bringing out (ex) of the student's strength (valere). This meaning is illustrated in the following reactions of students to the individualised and immediate feedback the e-medium offers. Note the students' increase in focused attention, indicated in square brackets:




(i) If the student answers an item correctly, the computer responds with a positive word (for example Kolskoot!, "Bull's eye!") and supplies the reason why the answer is correct. [54%]

(ii) In the immediate feedback after a first incorrect response, it points out the right direction by means of clue commentary, which the student then thinks about before trying a second time. [84.4%]

(iii) If the student then responds correctly, the computer responds as in (i) above.

(iv) After a second incorrect response, the computer supplies the reason for it, plus the correct response and the reason for that. [87.6%]
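Steps (i) to (iv) amount to a small two-attempt feedback routine; a minimal sketch in which the function, fields and message texts are invented for illustration:

```python
def feedback(item, answer, attempt, max_attempts=2):
    """Two-attempt feedback protocol of steps (i) to (iv).

    item holds the correct option, a clue, and the two explanations."""
    if answer == item["correct"]:
        # (i)/(iii): a positive word plus the reason the answer is correct.
        return f"Kolskoot! {item['why_correct']}"
    if attempt < max_attempts:
        # (ii): a clue that makes the student reflect before the second try.
        return f"Not quite. Clue: {item['clue']}"
    # (iv): reason for the error, plus the correct response and its reason.
    return (f"{item['why_wrong']} The correct answer is "
            f"{item['correct']}: {item['why_correct']}")

item = {
    "correct": "B",
    "clue": "reread the second paragraph of the text",
    "why_correct": "option B restates the theme sentence.",
    "why_wrong": "Option C repeats a detail, not the theme.",
}
print(feedback(item, "C", attempt=1))  # clue; the student gets a second chance
print(feedback(item, "B", attempt=2))  # praise plus the reason
```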

The statistical increase in attention best illustrates the power of the computer as an authentic learning instrument. Even when the student sees the total mark for the exercise, the computer highlights the criteria or aspects to which the student must still pay attention. The computer thus continues to teach while it evaluates, and in doing so leads the student through a process of reflection and developing insight. This is true empowerment, indeed far removed from the computer as a testing machine that merely generates an accumulative class mark without students gaining insight into their weaknesses and their strengths. e-Learning indeed achieves greater focused attention from many students simultaneously as a result of this intense individualised feedback, which is practically unfeasible in contact teaching with large classes.

The impact of this evaluation concept of e-learning on the design of a course is enormous in terms of quality planning and labour-intensiveness. It requires the designer to look at the lesson design continually from the learner's needs, in order to establish which content-related and computer-technical programme aspects must be adapted for more effective practice of reading skills. The following questions force the designer to reflect critically:

• With which mistaken concepts or skills do students struggle?

• Where in my design are there gaps because I did not realise the precise extent of the students' language problem?

It is here that the striking cross-validation between the qualitative data in the questionnaire and learning journal and the quantitative data on the low post-test mark and the marks for certain skill types was of inestimable value (5.2.2.2). It showed clearly that, for the supplementary e-learning programme, the designer must devise more exercises around overarching reading strategies that address students' deficient abstract vocabulary and their inability to infer meaning in context, especially in texts with complex syntax.

With the educational advantage of intense individualised teaching of many students simultaneously via a supplementary e-learning programme, as has become clear in the preceding discussion, the core problem of limited contact time for academic literacy courses in faculties is thus in principle solved.




The key word is "individualised", for it means that every studying individual can sit and work with focused attention, without the presence of the lecturer, in his or her own time and in any place: in one of the many internet access areas on campus, in a residence room or in a computer laboratory.

There are, however, two non-negotiable conditions for the success of this flexible learning principle:

(i) the e-learning programme must satisfy the educational evaluation principle (discussed above), which presupposes the computer as facilitator of the authentic learning process; and

(ii) the content must be thematically integrated with the limited number of contact lectures that introduce the course's skills and concepts.

The reader is also reminded of the fact that, in their feedback on preferred time allocation for the ideal teaching-learning combination (Question 5 in the questionnaire), the students gave second preference to e-learning as flexi-time, and in their written motivations were strikingly ambivalent about the presence or absence of the lecturer (5.2.1). One could therefore generalise that the average student campus-wide will not object to flexible learning on the computer if the two conditions above are met.

The most important criterion of successful flexible learning lies locked up in the concept of differentiated learning, which leads the student to responsible reflection, the basis of successful independent study (Collis & Nikolova, 1998). The supplementary e-learning programme's contribution to flexible learning therefore lies especially at the level of differentiated skill levels. The process runs as follows: a number of sessions are linked, in terms of content and skills, to certain lecture contents. Students decide for themselves how well they understood the introduction of skills and concepts in the lecture. On that basis they choose at which level to tackle each session. The sessions begin with a pre-test and end with the same content as post-test. Between these two product tests lies the series of exercises in which students are evaluated and taught through individualised feedback. The programme thus provides a unique chain of testing (learning as product), evaluation (learning as process) and testing again.

The pre-test mark indicates whether students are aiming too high or too low and should rather do the lower or the higher level. The post-test mark indicates whether students should repeat the series of exercises and/or attempt the lower or higher level. Everything remains their own choice, because learning successfully is their responsibility. Nobody forces them to do all the exercises, or to do them in a specific order, unless this is indicated in advance because of the causal relationship between certain skills.
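The pre-test and post-test routing just described reduces to a small decision rule; a sketch assuming a 50% mastery threshold, which the article does not specify:

```python
def route(pretest, posttest, level, pass_mark=50):
    """Suggest the next step in the two-level differentiated programme.

    The 50% pass mark is an assumed cut-off for illustration only."""
    if pretest >= pass_mark and level == 1:
        return "aiming too low: try level 2"
    if pretest < pass_mark and level == 2:
        return "aiming too high: try level 1"
    if posttest < pass_mark:
        return "repeat the exercise series (or change level)"
    return "mastered: continue to the next session"

print(route(pretest=35, posttest=62, level=1))  # → mastered: continue to the next session
```

The suggestion stays a suggestion: as the article stresses, the choice remains the student's own responsibility.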

This differentiated approach aims especially to empower those students who struggle with academic vocabulary and complex syntax. In speed-reading exercises they get more time; texts are shorter; and difficult words are clarified by glosses or in the feedback. Level 2 students' speed-reading exercises naturally take place within a shorter time frame, their texts are longer and they get fewer glosses. All students also have access to a word guide that concisely explains especially the more difficult abstract words and important key concepts within the course framework.

Owing to the labour-intensive design and programming of this kind of flexible learning programme, with its individualised feedback and built-in support mechanisms, the possibility must be considered of extending the full integrated academic literacy course to other faculties as well. The economy of scale (a large number of students) would justify the costs of design and programming. The question is: Is it practically feasible?


7. Concluding remarks and conclusion

The course, which was already implemented in the Faculty of Engineering in 2006 as a semester course, has since 2007 also been offered in the Faculty of Science as a year course with two contact lectures per week. Extensions to all faculties are indeed practically feasible because of the generic nature of the critical outcomes and the specific course outcomes of an academic literacy course. The principle of economy of scale can thus be applied in order to derive financial benefit. This principle drives precisely the immediate outflows of the project, namely that the designer and programmer, with the qualitative data from the research project as stimulus, adapt the supplementary e-learning programme in such a way that it can be extended to more faculties and possibly also to other institutions, irrespective of the content of the subject discipline.

What cannot be lost sight of, however, is the probable negative impact of a so-called cost-effective approach (fewer lecturers and larger classes) on the quality of teaching and learning; but, as already mentioned, this aspect could not be investigated within the limitations of this study.

When the computer is truly harnessed as facilitator of the student's learning process, the student, as digital native, is met in a digital environment with which he or she is familiar. This familiar environment should contribute to converting the student's experience of academic language as alienating discourse into academic language as a portal of access.


Bibliography

Bencze, J.L. 2005. Constructivism. [Online]. Available at: http://leo.oise.utoronto.ca/~lbencze/Constructivism.html. Accessed on 9 December 2005.

Blacquiére, A. 1989. Reading for survival: text and the second language student. South African Journal of Higher Education 3(1):73-82.

Blanckenberg, H.C. 1999. Die onderrig van Afrikaans as alternatiewe taal in 'n uitkomsgebaseerde onderwysbenadering. Unpublished VDO study guide. Johannesburg: Nasionale Private Kolleges.

Botha, H.L. & Cilliers, C.D. 1999. Preparedness for university study: Designing a thinking skills test. South African Journal of Higher Education 13(1):144-152.

Butler, H.G. & Van Dyk, T.J. 2004. An academic English language intervention for first-year engineering students. South African Journal of Linguistics 22(1&2):1-8.

Collis, B. & Nikolova, I. 1998. Flexible learning and design of instruction. British Journal of Educational Technology 29(1):59-72.

Leibowitz, B. 2001. Students' prior learning and their acquisition of academic literacy at a multilingual South African university. Unpublished doctoral thesis. Sheffield: University of Sheffield.

Macdonald, C.A. 1990. Crossing the threshold into standard three in black education: The consolidated main report of the threshold project. Pretoria: Human Sciences Research Council.

Mamaila, K. 2001. R1,3bn spent on dropout students. Star, 15 May 2001:1.

McKenzie, K. & Schweitzer, R. 2001. Who succeeds at university? Factors predicting academic performance in first year Australian university students. Higher Education Research and Development 20(1):21-33.

Natal Witness. 2004. Illiterate matrics. Natal Witness, 27 September:8.

Perkins, D.M. 1991. Improvement of reading and vocabulary skills at the University of Transkei. South African Journal of Education 11(4):231-235.

Pierce, W. 2003. Student preparedness for college work. [Online]. Available at: http://academic.pg.cc.md.us/~wpeirceMCCCTR/ctrep~l.html. Accessed on 24 March 2004.

Prensky, M. 2001. Digital natives, digital immigrants. On the Horizon 9(5):1-15.

Pretorius, E.J. 1995. Reading as an interactive process: Implications for studying through the medium of a second language. Communicatio 21(2):33-43.

Rademeyer, A. 2005a. Te veel studente "hoort nie werklik op universiteit": Tot soveel as 30% van eerstejaars het nie vrystelling nie. Die Burger, 14 May:4.

Rademeyer, A. 2005b. Klas van 2008. Insig, 28 February:40.

Vorster, J. & Reagan, T. 1990. On the lexical development of L1 and L2 speakers. South African Journal of Linguistics 9(3):80-84.

Zamel, V. & Spack, R. (eds.) 1998. Negotiating academic literacies: Teaching and learning across languages and cultures. New Jersey: Lawrence Erlbaum.


Instructions to authors

Ensovoort gladly receives articles, book reviews and creative writing (poems, prose and short dramas) with a view to publication. The focus falls on Afrikaans and Dutch literature within the context of both the broader South African and the Dutch societies. Contributions of a sociocultural, environmental-literary, applied-linguistic or comparative nature are also relevant.

Although contributions in Afrikaans, Dutch and English receive preference, contributions in the other languages of South Africa are also welcomed.

Ensovoort is accredited for subsidy to universities and research outputs. Page fees will be recovered from the institutions of authors whose contributions qualify for subsidy.

Contributions of at most 6 000 words must be sent electronically to the editors and must be typed in one-and-a-half spacing in 12-point Times New Roman, with indented paragraphs and without a blank line between paragraphs. Articles must also contain a translation of the title and a summary of 150 to 200 words in English.

Contributions are evaluated by at least two referees. Although the referees' reports may be sent to the contributors, the referees' identity will be kept confidential throughout.

All contributions must already be ready for publication at the time of submission and must be properly language-edited. High standards are set with regard to quality and linguistic polish. The editors do reserve the right to language-edit contributions (if necessary) after evaluation, but respect the individuality of authors as far as possible.

The Harvard method must be used for references. References in the text must therefore contain the surname(s) of the author(s), year of publication and page numbers between brackets, for example (Cloete, 1963:6).

The numbers of endnotes appear as superscripts without brackets and to the right of any punctuation marks in the text. The endnotes appear at the end of the text, after the bibliography.

Full details of the literature referred to in the text must be placed at the end of the article under the heading "Bibliography". The literature is arranged alphabetically according to the authors' surnames. Note the following examples:

Cloete, T.T. 1963. Op die woord af: Opstelle oor die poësie van N.P. van Wyk Louw. Johannesburg, Port Elizabeth, Kaapstad en Bloemfontein: Nasionale Boekhandel.

Coetser, Johannes Lodewikus. 1990. Die geleentheidsdrama by N.P. van Wyk Louw. Pretoria: Universiteit van Suid-Afrika. (Ongepubliseerde D.Litt. et Phil.-proefskrif.)

Faverey, Hans. 1993. Verzamelde gedichten. Amsterdam: De Bezige Bij.

Marais, Johann Lodewyk. 2005. Die belangrikste Afrikaanse gedig(te). [A]. Beskikbaar by: http://www.oulitnet.co.za/poesie/default.asp <Geraadpleeg op 29 Junie 2007>.

Van Coller, H.P. & Odendaal, B.J. 1999. George Weideman (1947-). In: Van Coller, H.P. (red.). Perspektief en profiel: 'n Afrikaanse literatuurgeskiedenis: Deel 2. Pretoria: J.L. van Schaik, pp. 764-785.

Viljoen, Louise. 2003. Die digter as reisiger: Twee gedigsiklusse van Leipoldt en Krog. Stilet 15(1), Maart, pp. 80-100.

Opinions expressed in contributions are not necessarily endorsed by the editors.


Instructions to authors 


Ensovoort gladly receives articles, book reviews 
and creative writing (poems, prose and short 
dramas) for publication. The focus is on Afri- 
kaans and Dutch literature in the context of 
both the broad South African and the Dutch 
and Flemish communities. Contributions of 
sociocultural, environmental literature, ap- 
plied linguistics or comparative nature are also 
applicable. 

Although contributions in Afrikaans, Dutch
and English will receive priority, contributions 
in the other South African languages will also 
be welcome. 

Ensovoort has been accredited for the
subsidisation of research outputs at
universities. Page fees will be recovered
from the institutions of authors whose
contributions qualify for subsidy.

Contributions must not exceed 6 000 words
and must be sent electronically to the
editors, typed in one-and-a-half line spacing
in 12-point Times New Roman, with indented
paragraphs but without blank lines between
paragraphs. The text of articles must also be
preceded by an English translation of the
title and an English summary of between
150 and 200 words.

Contributions will be reviewed by at least 
two referees. Although the referees' reports 
can be sent to contributors, the identity of the 
referees will be kept confidential. 

All contributions must be ready for publi-
cation when submitted and must have been
thoroughly language-edited. High standards
of quality and linguistic finish are expected.
The editors reserve the right to edit contri-
butions after they have been refereed (if
necessary), but the individuality of authors
will be respected as far as possible.

The Harvard method must be used for re-
ferences. References in the text must there-
fore consist of the surname(s) of the
author(s), year of publication and page
number(s) in brackets, for example
(Cloete, 1963:6).

The numbers of endnotes appear as super-
scripts without brackets, to the right of any
punctuation marks in the text. The endnotes
appear at the end of the text after the
bibliography.

Full details of the literature referred to in 
the text must appear at the end of the article 
under the heading "Bibliography". The litera- 
ture must be arranged alphabetically accord- 
ing to the authors' surnames. Please note the 
following examples: 

Cloete, T.T. 1963. Op die woord af: Opstelle oor 
die poësie van N.P. van Wyk Louw. Johan-
nesburg, Port Elizabeth, Kaapstad and 
Bloemfontein: Nasionale Boekhandel. 
Coetser, Johannes Lodewikus. 1990. Die 
geleentheidsdrama by N.P. van Wyk
Louw. Pretoria: Universiteit van Suid- 
Afrika. (Unpublished D.Litt. et Phil. 
thesis.) 

Faverey, Hans. 1993. Verzamelde gedichten. 

Amsterdam: De Bezige Bij. 

Marais, Johann Lodewyk. 2005. Die belang- 
rikste Afrikaanse gedig(te). [A]. Available 
at: http://www.oulitnet.co.za/poesie/ 
default.asp <Accessed on 29 June 2007>. 
Van Coller, H.P. & Odendaal, B.J. 1999.
George Weideman (1947-). In: Van Coller,
H.P. (red.). Perspektief en profiel: 'n
Afrikaanse literatuurgeskiedenis: Deel 2. 
Pretoria: J.L. van Schaik, pp. 764-785. 
Viljoen, Louise. 2003. Die digter as reisiger: 
Twee gedigsiklusse van Leipoldt en Krog. 
Stilet 15(1), Maart, pp. 80-100. 

Opinions expressed in contributions are not
necessarily endorsed by the editors.







Academic literacy itself is no longer the domain of those who only wish to do good or to make some overt political statement. Rather, it is a growing field involving those who intend to go about their business of facilitating the language development of students in as theoretically and socially responsible a way as possible ... teaching on its own will never be sufficient to give an adequate response to the needs of the students at the receiving end of our language instruction: our teaching needs to be informed by research of the highest quality ...


ISSN 0257-2036 





